IaC vs. ClickOps
Clicking things (ClickOps) is the common practice of using graphical interfaces to get things done. For example, creating an LXC container in Proxmox means starting from the Create CT button on the main screen, setting the desired resources, installing packages and configuring SSH, Docker and firewall rules by hand, then repeating the whole process for each new container.
I'm working to minimize this approach by replacing it with Infrastructure as Code wherever possible. I'm still a beginner, but I'm having a blast!
TL;DR
I'm experimenting with orchestration tools to make my infrastructure automated, reproducible and documented. In this post I explain how to create a reusable LXC template in Proxmox.
My infrastructure
Most of the creation and maintenance of resources on my server happens through OpenTofu, which generates LXCs and virtual machines, and Ansible, which manages system updates and service backups.
Each service has its own docker-compose file in a git repository hosted on my Forgejo instance.
Every day Renovate checks for available updates to these services. If updates are found, it creates a pull request with the necessary changes. After reviewing the release notes I can merge the PR, and Komodo takes care of updating the affected services.
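For reference, a minimal renovate.json covering this kind of setup might look like the sketch below; the exact config depends on your Renovate installation, so treat it as a starting point:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "enabledManagers": ["docker-compose"]
}
```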
There's still one manual step: after creating a container with OpenTofu, I need to connect to its shell via Proxmox's interface. Why? The Alpine template I use doesn't include Python, which is required for Ansible to complete the initial setup.
The solution is pretty straightforward: I finish the setup in a new container, convert it into a template and I'm done! There are just a couple of things to keep in mind to make the process smoother.
I'm applying a similar (and perhaps even more interesting) concept to my laptop: I use a custom image built with BlueBuild, based on Bluefin. The repository is available here.
Creating container templates in Proxmox
LXC configuration
Start by creating a new Alpine LXC that will serve as the base template for all future resources (a CLI sketch of this step follows the list below):
- connect to the container's shell and configure it as needed:

```sh
# System update and base software installation
apk update && apk upgrade
apk add python3 openssh doas bash bash-completion shadow curl vim nano \
    docker docker-compose openrc

# Tailscale setup
curl -fsSL https://tailscale.com/install.sh | sh

# Timezone (adapt to your own)
setup-timezone -z Europe/Rome

# Enable SSH and Docker
rc-update add sshd
rc-update add docker boot
rc-service sshd start
rc-service docker start

# Configure doas for wheel group
mkdir -p /etc/doas.d
echo "permit persist :wheel" > /etc/doas.d/20-wheel.conf
chmod 644 /etc/doas.d/20-wheel.conf

# Create non-root user (docker and wheel groups)
adduser -D -u 1000 -G docker -s /bin/bash username
echo "username:temporary" | chpasswd
addgroup username wheel

# Set up SSH
mkdir -p /home/username/.ssh
chmod 700 /home/username/.ssh
touch /home/username/.ssh/authorized_keys
chmod 600 /home/username/.ssh/authorized_keys
chown -R username:docker /home/username/.ssh

# Harden SSH
sed -i 's/^#*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
rc-service sshd restart
```
- add the required SSH keys for the newly created user;
- shut down the container and, from the Proxmox host, remove its network interface:

```sh
sudo pct set <CID> --delete net0
```
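For reference, the base container itself could also be created from the Proxmox host's shell instead of the Create CT wizard. A rough sketch, where the container ID, storage names and template version are placeholders to adapt to your setup:

```sh
# ID, storage names and template version are placeholders: adapt to your setup.
# nesting=1 is needed to run Docker inside an unprivileged LXC.
pct create 200 local:vztmpl/alpine-3.21-default_20240911_amd64.tar.xz \
    --hostname alpine-template \
    --memory 512 --cores 1 \
    --rootfs local-lvm:4 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --features nesting=1 \
    --unprivileged 1
```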
Creating the template
To create the template, simply generate a backup in your template directory (in my case /tank/isos/template/cache) using the command:
```sh
vzdump <CID> --mode stop --compress gzip --dumpdir /tank/isos/template/cache/
```
Rename the resulting file (e.g. `mv new_vz_dump.tar.gz custom_alpine_3.23.tar.gz`) and the template is now ready to use!
Unlike creating a template through Proxmox's UI, this method doesn't destroy the original LXC, which can be deleted or reused if you need to update the configuration later.
Creating new LXCs from the template
Now generate a new LXC using OpenTofu, pulling from the custom template instead of the generic Alpine one. Once created, shut down the container in order to add the two configuration lines required for Tailscale to work:
```sh
# /etc/pve/lxc/<CID>.conf
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```
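As for the creation step itself, here's a minimal sketch of what the OpenTofu resource for such a container might look like, assuming the bpg/proxmox provider; the node name, datastore and template storage IDs are placeholders:

```hcl
resource "proxmox_virtual_environment_container" "new_host" {
  node_name    = "pve" # placeholder node name
  unprivileged = true

  operating_system {
    # the custom template created above; storage ID is a placeholder
    template_file_id = "local:vztmpl/custom_alpine_3.23.tar.gz"
    type             = "alpine"
  }

  disk {
    datastore_id = "local-lvm" # placeholder datastore
    size         = 8
  }

  network_interface {
    name   = "eth0"
    bridge = "vmbr0"
  }

  features {
    nesting = true # needed to run Docker inside the container
  }
}
```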
Finally, create a new Ansible playbook to run system updates, connect the host to Tailscale and change the user password (the root password is automatically generated by OpenTofu).
Make sure `passlib` is installed on the machine running Ansible, since the `password_hash` filter needs it: `sudo pacman -S python-passlib`
`lxc-setup.yaml`
```yaml
---
- name: LXC initial configuration
  hosts: new_lxcs
  remote_user: username
  become: true
  become_method: doas
  gather_facts: true
  vars_files:
    - vault.yaml
  vars:
    tailscale_auth_key: "{{ vault_tailscale_auth_key }}"
  tasks:
    - name: Update all packages
      apk:
        update_cache: true
        upgrade: true

    - name: Ensure Docker is running
      service:
        name: docker
        enabled: true
        state: started

    - name: Check if Tailscale is already authenticated
      command: tailscale status
      register: tailscale_status
      failed_when: false
      changed_when: false

    - name: Authenticate and start Tailscale
      command: tailscale up --operator=username --auth-key={{ tailscale_auth_key }}
      when: "'Logged out' in tailscale_status.stdout or tailscale_status.rc != 0"
      register: tailscale_up

    - name: Display Tailscale status
      debug:
        msg: Tailscale is now connected
      when: tailscale_up.changed or ('BackendState=Running' in tailscale_status.stdout)

    - name: Set user password
      user:
        name: username
        password: "{{ new_host_sudo_pass | password_hash('sha512') }}"
```
`hosts.ini`
```ini
[new_lxcs:vars]
ansible_user=username
ansible_become=yes
ansible_become_method=doas

[new_lxcs]
new-host ansible_host=new-host-ip ansible_become_pass='temporary'
```
`vault.yaml`
```yaml
new_host_sudo_pass: your-password
vault_tailscale_auth_key: tskey-auth-xx..x-yy..y
```
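Since the playbook is run with `--ask-vault-pass`, encrypt the vault file first:

```sh
ansible-vault encrypt vault.yaml
```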
Run the playbook with:
```sh
ansible-playbook -i hosts.ini lxc-setup.yaml --ask-vault-pass
```
At this point the container should be ready to go!
Final steps
Update the `hosts.ini` file:

- move the new host out of the `[new_lxcs]` group;
- replace `ansible_become_pass='temporary'` with `ansible_become_pass='{{ new_host_sudo_pass }}'`.
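After these changes, the inventory might look something like this; the `[lxcs]` group name (and duplicating the connection vars for it) is just one possible layout:

```ini
[lxcs:vars]
ansible_user=username
ansible_become=yes
ansible_become_method=doas

[lxcs]
new-host ansible_host=new-host-ip ansible_become_pass='{{ new_host_sudo_pass }}'

[new_lxcs:vars]
ansible_user=username
ansible_become=yes
ansible_become_method=doas

[new_lxcs]
```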
It's surely not the cleanest solution, but at 2 AM my brain starts to give up.
Conclusion
I haven't reinvented the wheel here. All I did was apply established IaC principles to my small homelab, which is now defined in versioned code, documented and reproducible.
I still have a long way to go, but seeing everything take shape from configuration files instead of click-heavy sessions is incredibly satisfying (·ω·)