I hate repeated manual labor. So I set out to apply the cloud-init experience I had gained while experimenting with Harvester to automate the setup of virtual machines on TrueNAS Scale, with the help of https://blog.robertorosario.com/setting-up-a-vm-on-truenas-scale-using-cloud-init/.
A few things worth noting:
- I created a child dataset specifically for VMs in each of our datasets ([f]ast = SSD for root filesystems, [b]igdata = HDD for storage); this makes administration much easier (see the sketch after this list)
- I went with Debian, which offers a slim genericcloud image. That image worked on Harvester, but it does not work here because the cloud-init config is mounted as a CD-ROM and genericcloud is missing the CD-ROM drivers - so after some tinkering I found the generic image to work well
- I added the display password (VM_PASSWORD), which is unfortunately mandatory now
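For reference, creating those child datasets is a one-time step. This is a minimal sketch, assuming the pool and dataset names that show up in the script further down (eagle/vm for root disks, b/vm for data disks); adjust them to your own layout:

```sh
# one-time: parent datasets that will hold the per-VM zvols
# (eagle/vm and b/vm are the names used in the script below; adjust to your pools)
sudo zfs create eagle/vm
sudo zfs create b/vm
sudo zfs list -r eagle/vm b/vm   # verify they exist
```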
The section "BOTH" contains a few variable definitions that should be provided before every of the other commands. The "LOCAL" section exists because the cloud-image-utils
package is needed and TrueNAS Scale does not allow you to easily install packages. It will presume you provided a suitable cloud-init config in <VM-Name>-seed.qcow.yaml
and then convert and copy it over to the TrueNAS Scale server (which I shorthanded as nas
in my ssh config). All other variables should be self-explanatory. I did not extract the size of the disks into variables because we usually just go with lavish defaults due to our setup having plenty of space (>100TB).
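Since the whole setup hinges on that seed file, here is a minimal sketch of what a <VM_NAME>-seed.qcow2.yaml could look like. The user name, SSH key, hostname and package list are placeholders for illustration, not my actual configuration:

```sh
# minimal cloud-init user-data sketch; user, key and packages are placeholders
cat > ${VM_NAME}-seed.qcow2.yaml <<'EOF'
#cloud-config
hostname: nostr
users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... replace-with-your-key
package_update: true
packages:
  - qemu-guest-agent
EOF
```

cloud-localds then turns this user-data into the seed image that the VM picks up from the emulated CD-ROM drive on first boot.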
Here are the commands in compact form, for easy copying:
```sh
# BOTH
VM_NAME=nostr
IMAGE_PATH=/mnt/b/media/iso/servers/
SEEDFILE=${VM_NAME}-seed.qcow2

# LOCAL
${EDITOR:-nano} $SEEDFILE.yaml
cloud-localds --verbose $SEEDFILE $SEEDFILE.yaml
scp $SEEDFILE nas:${IMAGE_PATH}

# REMOTE
VM_PATH=eagle/vm/${VM_NAME}
# do not use genericcloud here as it is missing CDROM drivers
VM_IMAGE=http://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
# extra data volume, comment out to omit
VM_DATA=b/vm/${VM_NAME}-data
VM_MEMORY=$(expr 8 \* 1024)
VM_PASSWORD=password

sudo zfs create -V 40G "${VM_PATH}"
test -n "${VM_DATA}" && sudo zfs create -V 150G "${VM_DATA}"
cd "${IMAGE_PATH}"
test -e "$(basename ${VM_IMAGE})" || wget "${VM_IMAGE}"
case "${VM_IMAGE}" in
  (*.raw) sudo dd if=$(basename ${VM_IMAGE}) of=/dev/zvol/${VM_PATH} status=progress bs=1M;;
  (*)     sudo qemu-img convert -O raw $(basename ${VM_IMAGE}) /dev/zvol/${VM_PATH};;
esac

# Create the VM
RESULT=`midclt call vm.create '{"name": "'${VM_NAME}'", "cpu_mode": "HOST-MODEL", "bootloader": "UEFI_CSM", "cores": 2, "threads": 2, "memory": '${VM_MEMORY}'}'`
VM_ID=`echo ${RESULT} | jq '.id'`
# Add the display
midclt call vm.device.create '{"vm": "'${VM_ID}'", "dtype": "DISPLAY", "order": 1004, "attributes": {"web": true, "type": "VNC", "bind": "0.0.0.0", "password": "'${VM_PASSWORD}'", "wait": false}}'
# Obtain a random MAC address
MAC_ADDRESS=`midclt call vm.random_mac`
# Add the NIC
midclt call vm.device.create '{"vm": "'${VM_ID}'", "dtype": "NIC", "order": 1010, "attributes": {"type": "VIRTIO", "nic_attach": "br0", "mac": "'${MAC_ADDRESS}'"}}'
# Add the root disk
midclt call vm.device.create '{"vm": "'${VM_ID}'", "dtype": "DISK", "order": 1001, "attributes": {"path": "/dev/zvol/'${VM_PATH}'","type": "VIRTIO"}}'
# Add a data disk (only if VM_DATA is set above)
test -n "${VM_DATA}" && midclt call vm.device.create '{"vm": "'${VM_ID}'", "dtype": "DISK", "order": 1002, "attributes": {"path": "/dev/zvol/'${VM_DATA}'","type": "VIRTIO"}}'
# Add the CDROM
midclt call vm.device.create '{"vm": "'${VM_ID}'", "dtype": "CDROM", "order": 1005, "attributes": {"path":"'${IMAGE_PATH}${SEEDFILE}'"}}'
```
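With all devices in place, the VM can be started through the same middleware client. A short sketch, assuming the vm.start, vm.update and vm.query middleware methods accept the arguments shown here; I also enable autostart so the VM comes back after a host reboot:

```sh
# REMOTE: boot the VM and have it start automatically with the host
midclt call vm.start ${VM_ID}
midclt call vm.update ${VM_ID} '{"autostart": true}'
# check its current state
midclt call vm.query '[["id", "=", '${VM_ID}']]' | jq '.[0].status'
```

Once cloud-init has finished its work on first boot, the seed CD-ROM is no longer needed; I believe it can be detached again via vm.device.delete, but leaving it attached does no harm.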