Proxmox - Migrate Installation from LVM to ZFS

This article details how to migrate an LVM-based Proxmox installation on a single disk to a mirrored ZFS pool.

1) Backup the Proxmox Host Disk

Boot from an Ubuntu/Linux live CD and run ddrescue:

livecd #~ ddrescue -f -n -r3 /dev/current_proxmox_disk /dev/new_disk /root/log.file

/dev/sdd (current Proxmox disk)
/dev/sde (new disk)
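
If you are unsure which device is which before running ddrescue, lsblk gives a quick overview (the exact columns available may vary slightly between versions):

lsblk -o NAME,SIZE,MODEL,SERIAL
List the block devices with their size, model and serial number to tell the disks apart.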

Launch GParted; it should detect that the partition table does not cover the whole disk and prompt you to fix it. Accept the prompt, then select your disk and resize the partition to the maximum.
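
If you prefer the command line over GParted, sgdisk (from the gdisk package) can relocate the backup GPT data structures in the same way; this assumes the clone ended up on /dev/sde:

sgdisk -e /dev/sde
Move the backup GPT header and table to the end of the disk.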

Remove the old disk and boot from the new disk.

Strictly speaking, we do not need to resize the partitions on the cloned disk, because the ZFS pool created later will use the full size of the new disk anyway.

2) Partitioning the New Disk

We will create two new partitions on the new disk.

Install parted if not already installed.

apt install parted

And create the partitions as below.

parted -s /dev/sde mktable gpt
Create a new GPT partition table.
parted -s /dev/sde mkpart extended 34s 2047s
Create a new BIOS boot partition.
parted -s /dev/sde mkpart extended 2048s 100%
Create a new partition using the remaining space.
parted -s /dev/sde set 1 bios_grub on
Set partition 1 as a GRUB BIOS partition.

Our partition table will look like this:

root@hv1:~/storage/megacli# fdisk -l /dev/sde
Disk /dev/sde: 136.2 GiB, 146263769088 bytes, 285671424 sectors
Disk model: MR9271-4i       
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 02380911-B185-4FC8-9481-E14B2EF20DE4

Device     Start       End   Sectors   Size Type
/dev/sde1     34      2047      2014  1007K BIOS boot
/dev/sde2   2048 285669375 285667328 136.2G Linux filesystem

3) Creating the ZFS Pool

We will create a ZFS pool on the new disk using the recently created partition 2.

zpool create -f rpool /dev/sde2
root@hv1:~/storage/megacli# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          sde2      ONLINE       0     0     0
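
If you prefer the pool member to be referenced by a stable path instead of an sdX name, the pool can also be created from the partition's /dev/disk/by-id link; the path below is only a placeholder for your own disk, and ashift=12 is a common choice for drives with 4K sectors:

zpool create -f -o ashift=12 rpool /dev/disk/by-id/<your-disk-id>-part2
Create the pool with a persistent device path (replace the placeholder with your disk's id) and an explicit sector alignment.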

4) Creating the Proxmox ZFS Datasets

We need to create the datasets that Proxmox needs to run.

zfs create rpool/ROOT
zfs create rpool/ROOT/pve-1
zfs create rpool/data
zfs create -V 8G rpool/swap

And we should see the output below.

root@hv1:~/storage/megacli# zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
rpool             8.25G   123G       25K  /rpool
rpool/ROOT          48K   123G       24K  /rpool/ROOT
rpool/ROOT/pve-1    24K   123G       24K  /rpool/ROOT/pve-1
rpool/data          24K   123G       24K  /rpool/data
rpool/swap        8.25G   132G       12K  -
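
Swap on a ZFS volume can misbehave under memory pressure, so the OpenZFS FAQ suggests creating the swap zvol with a few extra properties. A sketch of that variant, which would replace the plain zfs create -V 8G rpool/swap above:

zfs create -V 8G -b $(getconf PAGESIZE) \
    -o compression=zle -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none rpool/swap
Create the swap volume with page-sized blocks and the caching settings recommended for swap.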

5) Setting the SWAP Partition

The command below formats the swap volume on the ZFS pool.

mkswap /dev/zvol/rpool/swap
Setting up swapspace version 1, size = 8 GiB (8589930496 bytes)
no label, UUID=72598106-e0a3-479a-9baa-43b189b1b434

6) Syncing Proxmox Data with the ZFS Pool

It's time to sync our files from the old disk to the ZFS datasets.

cd /rpool/ROOT/pve-1
rsync -avx / ./
Copying all files to the ZFS pool.
mount --bind /proc proc
mount --bind /dev dev
mount --bind /sys sys
Mounting the system directories needed for the chroot.
swapoff -a
Disabling all devices marked as swap in /etc/fstab.
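
A quick sanity check that the copy looks complete (used sizes will not match exactly because of ZFS compression and metadata overhead):

df -h / /rpool/ROOT/pve-1
Compare the used space on the running root and on the new dataset.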

7) Updating /etc/fstab

We need to update /etc/fstab to point the swap entry to our ZFS pool and to remove the entry that mounts the old disk's file system to /.

chroot /rpool/ROOT/pve-1
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
Current content.
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/zvol/rpool/swap none swap sw 0 0
proc /proc proc defaults 0 0
Modified version.
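
Since /dev was bind-mounted into the chroot, you can confirm that the zvol path referenced in fstab actually exists:

ls -l /dev/zvol/rpool/swap
The symlink should point to the zd* device backing the swap volume.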

8) Updating GRUB

We also need to update GRUB to point to our ZFS pool by editing the file /etc/default/grub.

...
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX=""
...
Current content.
...
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
...
Updated content.

Let's set the bootfs property on our ZFS pool so it knows which dataset to boot from.

zpool set bootfs=rpool/ROOT/pve-1 rpool

It's time to reinstall GRUB on both disks after the recent changes.

grub-install /dev/sdd
Current Proxmox disk.
Installing for i386-pc platform.
Installation finished. No error reported.
grub-install /dev/sde
New disk.
Installing for i386-pc platform.
Installation finished. No error reported.

After installing GRUB we need to regenerate its configuration.

update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.119-1-pve
Found initrd image: /boot/initrd.img-5.4.119-1-pve
Found linux image: /boot/vmlinuz-5.3.18-3-pve
Found initrd image: /boot/initrd.img-5.3.18-3-pve
Found linux image: /boot/vmlinuz-5.3.10-1-pve
Found initrd image: /boot/initrd.img-5.3.10-1-pve
Found memtest86+ image: /ROOT/pve-1@/boot/memtest86+.bin
Found memtest86+ multiboot image: /ROOT/pve-1@/boot/memtest86+_multiboot.bin
done

We also need to instruct the root dataset to mount at /.

zfs set mountpoint=/ rpool/ROOT/pve-1
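
Before rebooting, it is worth confirming that both properties took effect:

zfs get mountpoint rpool/ROOT/pve-1
Should report / as the mountpoint.
zpool get bootfs rpool
Should report rpool/ROOT/pve-1 as the bootfs value.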

9) Rebooting into the ZFS Pool

With all the changes above, we are now ready to reboot and have Proxmox running from the ZFS pool.

Go ahead, exit the chroot and reboot your system.

exit
reboot

10) Adding a Disk to the ZFS Pool

After the reboot we will be running Proxmox from the ZFS pool. However, the pool only contains the new disk, and we want a mirrored pool.

First, we will deactivate the logical volumes of the old pve volume group.

lvchange -an pve
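
You can verify that the logical volumes are no longer active before touching the disk:

lvs -o lv_name,vg_name,lv_active pve
No volume in the pve group should be reported as active.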

Second, copy the partition table from the new disk to our current Proxmox disk (/dev/sdd).

sgdisk /dev/sde -R /dev/sdd

Third, after cloning the partition table, we need to randomize the disk and partition GUIDs.

sgdisk -G /dev/sdd
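
Optionally, verify that the old disk now carries the same layout as the new one:

sgdisk -p /dev/sdd
Print the GPT partition table of the old disk.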

Finally, we will add the current Proxmox disk (/dev/sdd) to the ZFS pool, which so far only contains the new disk (/dev/sde).

zpool attach -f rpool /dev/sde2 /dev/sdd2
Make sure to wait until the resilver is done before rebooting.

We can now check if the disk has been added to our pool with the command below.

root@hv1:~# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Nov 30 14:56:23 2021
        11.1G scanned at 207M/s, 2.74G issued at 51.0M/s, 11.1G total
        2.75G resilvered, 24.66% done, 0 days 00:02:48 to go
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sde2    ONLINE       0     0     0
            sdd2    ONLINE       0     0     0  (resilvering)

errors: No known data errors

11) Proxmox GUI Update

We now need to make a few changes to the Proxmox storage configuration to reflect the migration.

Update /etc/pve/storage.cfg as follows:

dir: local
        path /var/lib/vz
        content backup,vztmpl,iso,snippets

# Original Storage entry when proxmox was running on a single disk as LVM
#lvmthin: local-lvm
#       thinpool data
#       vgname pve
#       content rootdir,images
#

# Entry added manually for the ZFS local storage after the migration
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir

lvmthin: HV2-FW2
        thinpool HV2-FW2
        vgname HV2-FW2
        content images,rootdir
        nodes hv2
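
After saving the file, the Proxmox storage manager can confirm that the new ZFS storage is available (the exact output depends on your setup):

pvesm status
List all configured storages; local-zfs should show up as active.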

Conclusion

Be careful when issuing these commands: we are dealing with live disks, and a single mistake can break your current installation or, in the worst case, destroy all your data.

After the steps above, we have migrated our Proxmox installation from LVM to a mirrored ZFS pool.

pool: rpool
 state: ONLINE
  scan: resilvered 11.1G in 0 days 00:03:07 with 0 errors on Tue Nov 30 14:59:30 2021
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sde2    ONLINE       0     0     0
            sdd2    ONLINE       0     0     0

errors: No known data errors
