Mount Point Naming
Technical Background
By default, when using Hetzner Cloud volumes with automount = true, Cloud-init generates a generic mount point like /mnt/HC_Volume_<id>.
In many production systems, predictable mount points (e.g., /volume01) are preferred for simplicity and automation.
To achieve this, volume and server creation must be decoupled:
- The volume is created independently from the server.
- The server attaches the volume but does not automount it.
- Cloud-init handles directory creation, the /etc/fstab entry, and mount execution.
This avoids a cyclic dependency (server -> volume -> server) and ensures Cloud-init receives all required parameters for correct mounting.
Note
Hetzner's hcloud_volume_attachment resource allows attaching a volume after server creation without requiring automount.
Solution
Prerequisites
Create a main.tf, outputs.tf, /tpl/*, variables.tf, network.tf, providers.tf and secrets.auto.tfvars like in 15 Partitions And Mounting.
Edit Volume
- Disable automount for the volume resource inside volumes.tf:
resource "hcloud_volume" "volume01" {
  name      = "volume1"
  size      = 10
  location  = "hel1"
  format    = "xfs"
  automount = false
}
- Add the volume attachment to volumes.tf (note that server_id must not be set on the volume itself when a separate attachment resource is used):
resource "hcloud_volume_attachment" "volume01_attachment" {
server_id = hcloud_server.web.id
volume_id = hcloud_volume.volume01.id
}
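After terraform apply, the attachment can be double-checked from inside the server; the grep falls back to a short message when no volume is attached (e.g., when run on a different machine):

```shell
# List Hetzner volume device links; prints a hint when none are attached.
ls /dev/disk/by-id/ 2>/dev/null | grep HC_Volume || echo "no HC_Volume device found"
```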
Passing Parameters to Cloud-init
Edit your user data local_file resource inside your main.tf to contain:
resource "local_file" "user_data" {
  content = templatefile("tpl/userData.yml", {
    host_ed25519_private = indent(4, tls_private_key.host.private_key_openssh)
    host_ed25519_public  = tls_private_key.host.public_key_openssh
    devopsSSHPublicKey   = hcloud_ssh_key.loginUser.public_key
    volume_id            = hcloud_volume.volume01.id
    volume_name          = "volume01" # referenced as ${volume_name} in the template
  })
  filename = "gen/userData.yml"
}
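As a rough illustration of what templatefile produces from the two volume variables (a plain shell stand-in, not Terraform itself; the id 12345678 is a placeholder):

```shell
# Stand-in values; Terraform injects the real ones at render time.
volume_id=12345678
volume_name=volume01
# These are the lines the rendered userData.yml will effectively contain:
echo "mkdir -p /${volume_name}"
echo "mkfs.xfs -f /dev/disk/by-id/scsi-0HC_Volume_${volume_id}"
```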
Updating Cloud-init Template
Modify the tpl/userData.yml by adding the following:
runcmd:
  # Wait until the volume appears (max 20 seconds)
  - |
    for i in $(seq 1 20); do
      if [ -e /dev/disk/by-id/scsi-0HC_Volume_${volume_id} ]; then
        echo "Volume is available."
        break
      fi
      echo "Waiting for volume to appear..."
      sleep 1
    done
  # Create mount point
  - mkdir -p /${volume_name}
  # Trigger udev so volume is discoverable
  - udevadm trigger
  # Format the entire volume with the XFS file system
  - mkfs.xfs -f /dev/disk/by-id/scsi-0HC_Volume_${volume_id}
  # Write mount line into /etc/fstab
  - echo "/dev/disk/by-id/scsi-0HC_Volume_${volume_id} /${volume_name} xfs discard,nofail,defaults 0 0" >> /etc/fstab
  # Reload systemd units and mount volumes
  - systemctl daemon-reload
  - mount -a
  # Create partitions non-interactively (plain `fdisk` would wait for
  # keyboard input; the 5G split is an example, adjust to your layout)
  - |
    sfdisk /dev/sdb <<'EOF'
    label: gpt
    ,5G
    ,
    EOF
  - mkfs.ext4 /dev/sdb1
  - mkfs.xfs /dev/sdb2
  - mkdir -p /disk1 /disk2
  - mount /dev/sdb1 /disk1
  - mount /dev/sdb2 /disk2
  # Fail2ban setup
  - systemctl enable fail2ban
  - systemctl start fail2ban
  - |
    echo "[sshd]
    enabled = true
    port = ssh
    logpath = %(sshd_log)s
    maxretry = 3
    bantime = 3600" > /etc/fail2ban/jail.d/ssh.conf
  - systemctl restart fail2ban
  # Nginx setup
  - systemctl enable nginx
  - rm -f /var/www/html/*
  - >
    echo "I'm Nginx @ $(dig -4 TXT +short o-o.myaddr.l.google.com @ns1.google.com)
    created $(date -u)" >> /var/www/html/index.html
  # Plocate indexing
  - systemctl enable plocate-updatedb.timer
  - systemctl start plocate-updatedb.timer
  - updatedb
Info
- Waits up to 20 seconds for the block device to appear.
- Mount point gets created.
- Forces the OS to detect block devices (workaround for Hetzner automount bug).
- Ensures the volume is formatted with XFS.
- The fstab entry ensures that mounting is persistent across reboots.
- Mounts all configured filesystems.
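The wait loop from the runcmd section can be exercised locally by substituting the device path with a temporary file; the backgrounded touch simulates the volume attaching after about two seconds (the per-iteration "Waiting..." echo is omitted here to keep the output deterministic):

```shell
# Simulate the device appearing after ~2 seconds, then poll for it.
TARGET="/tmp/fake_volume_$$"
(sleep 2; touch "$TARGET") &
for i in $(seq 1 20); do
  if [ -e "$TARGET" ]; then
    echo "Volume is available."
    break
  fi
  sleep 1
done
rm -f "$TARGET"
# prints: Volume is available.
```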
Note
The partitions that were created, formatted, and mounted manually in 15 Partitions And Mounting are now created by this script.
Display Volume Info
Add these outputs to your outputs.tf to expose the volume's device path and id:
output "volume_device_name" {
  value       = hcloud_volume.volume01.linux_device
  description = "The volume's device name"
}

output "volume_device_id" {
  value       = hcloud_volume.volume01.id
  description = "The volume's id"
}
Note
This is useful for verifying the /etc/fstab entry and for debugging.
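The outputs can then be read back on the workstation with terraform output; this sketch falls back to a hint when no state exists yet (e.g., before the first apply):

```shell
# Prints the device path from state, or a hint if apply has not run yet.
terraform output -raw volume_device_name 2>/dev/null || echo "run terraform apply first"
```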
Apply and Verify Changes
- Apply the Terraform configuration:
terraform init
terraform apply
- Reboot the server:
sudo reboot
- Check mounts:
df -h | grep volume01
Success
/dev/disk/by-id/scsi-0HC_Volume_XXXXXX 10G ... /volume01
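A further check on the server confirms the fstab entry took effect; findmnt shows the source device, filesystem type, and target, and the sketch prints a fallback message when /volume01 is not mounted (e.g., when run on another machine):

```shell
# Show source, filesystem type and target of the mount, if present.
findmnt -n -o SOURCE,FSTYPE,TARGET /volume01 2>/dev/null || echo "/volume01 is not mounted"
```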