I recently moved 40GB from my LUKS-encrypted root logical volume to my home logical volume and learned a few new things about LUKS, LVM, and how Linux boots.

I was frequently running out of space on my home volume, but my root volume always had about 60GB (40%) free. I wanted to shift some of the free space from my root volume to my home volume.

The following is not a tutorial. Don’t follow these steps blindly. If you need to use these tools, read the man pages and do your own research before applying them to your own system.

Result

Below is my storage configuration after the operation. Before, I had less than 1GB free on my home volume. After, I had 38GB free, while retaining about 20GB free on the root volume. Structurally, the layout is the same as before; only the volume capacities changed.

➜  ~ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vgxubuntu-root   79G   53G   22G  72% /
/dev/nvme0n1p2              704M  309M  344M  48% /boot
/dev/nvme0n1p1              511M   24M  488M   5% /boot/efi
/dev/mapper/vgxubuntu-home  153G  109G   38G  75% /home
➜  ~ lsblk
NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
zram0                                         252:0    0     8G  0 disk  [SWAP]
nvme0n1                                       259:0    0 238.5G  0 disk  
├─nvme0n1p1                                   259:1    0   512M  0 part  /boot/efi
├─nvme0n1p2                                   259:2    0   732M  0 part  /boot
└─nvme0n1p3                                   259:3    0 237.3G  0 part  
  └─luks-d769b455-6d30-44fd-9c39-d45e57e0c1d2 253:0    0 237.2G  0 crypt 
    ├─vgxubuntu-root                          253:1    0    80G  0 lvm   /
    ├─vgxubuntu-swap_1                        253:2    0   976M  0 lvm   
    └─vgxubuntu-home                          253:3    0   156G  0 lvm   /home

Process

Unmounting the root volume without live media

Single User Mode

The volume that had excess capacity was vgxubuntu-root, which mounts to /. To shrink this volume, I first needed to unmount it: the tools that resize these filesystems can only shrink them while they are offline.

With the root volume mounted at /, it's hard to unmount it while keeping your system running. Normally, you would use some kind of bootable live media to obtain a running system in which the target volume is not the root of the virtual filesystem. This would commonly be a USB flash drive, which I did not initially have.

I decided to try to do this without live media. First, I rebooted into [[single user mode]]. I did this by rebooting my system into the [[GRUB]] boot menu, then editing the applicable boot entry and appending the single parameter to the kernel command line.
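
For illustration, the edited kernel line in GRUB looked something like this (the kernel version and existing options here are placeholders, not what my system actually showed):

linux /vmlinuz-5.15.0-generic root=/dev/mapper/vgxubuntu-root ro quiet splash single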

This produced a familiar root shell from which I could attempt to resize the volumes.

Curiously, the [[mount]] command reported that / was still mounted in read-write mode. I wasn’t sure whether single user mode would mount read-only or read-write. At least on my system, the default seems to be read-write.

I also wasn’t sure whether a read-only mount was shrinkable, but I wanted to try reducing the volume size from a read-only mount first, because the commands I needed to shrink the volume lived on that same filesystem.

I attempted mount -o remount,ro /. This failed because the mount point was busy. I no longer have the logs and can’t quite recall the details, but I used [[lsof]] to check for open files on the root volume and didn’t find any. I suspect I may have [[grep]]ped incorrectly.
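
Reconstructed from memory, since I no longer have the transcript, the exchange was roughly:

mount -o remount,ro /    # refused: mount point is busy
lsof | grep mapper       # my likely (and likely wrong) filter; lsof / would have been more direct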

init=/bin/sh

At this point, I decided to try another approach. After consulting an LLM, I was told that I could set the [[init kernel parameter]] instead of booting into single user mode. I’d heard of this before, when I did my [[LPIC]] certification, but I’d never used it.

To my understanding, this swapped [[systemd]] out for an [[sh]] console.
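
The kernel line, edited from the GRUB menu as before, would have looked roughly like this (again illustrative):

linux /vmlinuz-5.15.0-generic root=/dev/mapper/vgxubuntu-root ro init=/bin/sh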

After some $PATH troubles, I found the tools I needed. Most were in /sbin, where I expected them to be.
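
A shell started this way begins with a very sparse environment, so fixing it up looked something like this (a sketch of what I recall doing):

export PATH=/usr/sbin:/usr/bin:/sbin:/bin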

At this point, I took a detour. See the [Calculating new volume sizes] section. After my detour, I had calculated the sizes I needed and resumed my attempt to shrink the root volume.

Having seen that mounting the root volume in read-only mode was not sufficient to resize it, I decided to try unmounting it to see what would happen. [[umount]] ran with no output, and echo $? returned zero, so it would seem that it had unmounted the root volume. Running mount, however, showed that it was still mounted.
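
The exchange went roughly like this; the comments describe what I observed, and the grep pattern is illustrative:

umount /             # no output
echo $?              # printed 0
mount | grep ' / '   # still listed / as mounted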

To try to force the unmount, I ran umount -l / (a lazy unmount). This worked, but I could immediately tell that my session was no longer usable. I could no longer see the [[/dev/mapper]] devices, for example.

At this point I decided to reboot and get back to a usable console. Running exit resulted in my first ever kernel panic. I was almost proud; certainly excited.

rd.break

After some more research and a hard reboot, I decided to try my next approach. Instead of setting init=/bin/sh, I set [[rd.break]].

My understanding is that this drops you into a shell within the initramfs environment before control is handed to the init process on the real root. By that point, the initramfs has mounted your usual root filesystem at [[/sysroot]].

Within this environment, I tried a few approaches, none of which worked. Nevertheless, they were insightful.

The problem was that the tools I needed to resize the filesystem and its containing volume were not present in the ramdisk that I had a shell within. They were present in /sysroot, but I would have to unmount that to be able to perform the resize. I thought about copying those files to my local ramdisk and then unmounting sysroot. I knew that they were probably dynamically linked, though, and wasn’t sure that their dependencies would be present in my ramdisk either. Due to this uncertainty, and the fact that I had already been busy for some time, I did not attempt the approach.
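
Had I tried it, checking the dynamic dependencies would have looked something like this (hypothetical; I never ran these commands):

ldd /sysroot/sbin/resize2fs          # list the shared libraries the binary needs
cp /sysroot/sbin/resize2fs /sbin/    # copy the tool into the ramdisk
# ...plus every library ldd reported that the initramfs lacks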

Using live media

When I was unable to make the [[rd.break]] approach work, I decided to resort to live media. I found an external USB SSD, and I had an Ubuntu ISO on my system, so I rebooted into the multi-user graphical target and used [[dd]] to write the ISO to the external drive.
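
The write itself was a standard dd invocation, roughly like this (the device name is illustrative; verify it with lsblk first, because dd will happily overwrite the wrong disk):

dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress conv=fsync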

I was able to boot into my external Ubuntu environment with no trouble.

Having calculated the appropriate sizes for the volumes previously, I was now ready to proceed with the actual resize operation.

Calculating new volume sizes

In order to shrink a logical volume, I needed to first shrink the filesystem that it contained. In order to do this, I needed to determine the filesystem's minimum viable size. Luckily, [[resize2fs]] provides a flag, -P, that estimates the minimum possible size.

This produced a size in blocks:

➜  ~ resize2fs -P /dev/mapper/vgxubuntu-root
resize2fs 1.47.0 (5-Feb-2023)
Estimated minimum size of the filesystem: 14402132

Checking my filesystem using [[dumpe2fs]], I determined that each block was 4KiB:

➜  ~ sudo dumpe2fs /dev/mapper/vgxubuntu-root | grep -i "block size"
dumpe2fs 1.47.0 (5-Feb-2023)
Block size:               4096

Based on a quick calculation $$\frac{14402132 \times 4096}{1024^3} \approx 55\ \text{GiB}$$ I knew that I could theoretically shrink my root filesystem to 55GiB. Adding a buffer of 20GiB and rounding up, I decided to shrink my root volume to 80GiB.

I then determined the size of each [[logical extent]] by running the following:

➜  ~ sudo lvdisplay /dev/vgxubuntu/root
  --- Logical volume ---
  LV Path                /dev/vgxubuntu/root
  LV Name                root
  VG Name                vgxubuntu
  LV UUID                HeTLxq-ZLMI-3v0B-Pgxx-FATx-jN1C-BbLOKG
  LV Write Access        read/write
  LV Creation host, time xubuntu, 2022-03-27 16:19:58 +0200
  LV Status              available
  # open                 1
  LV Size                80.00 GiB
  Current LE             20480
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

This output is from afterwards, so the numbers were different at the time. Nevertheless, I divided LV Size by Current LE to determine that each logical extent was equal to 4MiB.
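
Using the numbers above as a worked example:

$$\frac{80\ \text{GiB}}{20480} = \frac{81920\ \text{MiB}}{20480} = 4\ \text{MiB}$$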

I did this because I was under the impression that I would need to provide the number of logical extents as a parameter to lvreduce, but I was wrong.

Resizing the volumes and filesystems

As shown above, my root, home and swap logical volumes are housed within a [[luks]]-encrypted volume. My external environment did not know about these volumes, so I ran cryptsetup luksOpen against /dev/nvme0n1p3. After running [[cryptsetup]], the root and home logical volumes became visible.
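
As a sketch of the unlock step: luksOpen takes a mapping name as its second argument (the name below is arbitrary), and the volume group may need explicit activation if udev doesn't handle it for you:

cryptsetup luksOpen /dev/nvme0n1p3 crypt_root
vgchange -ay    # activate any LVM volume groups found inside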

Before this, I had relied on my operating system to encrypt my drives, so it was fun to get some small insight into how to work with LUKS-encrypted volumes.

With the volumes now available, I attempted to shrink the root [[ext4]] filesystem using [[resize2fs]], but was told to run [[e2fsck]] first. I ran e2fsck without any issues and reran resize2fs. After this, I ran [[lvreduce]] to shrink the logical volume that contains the filesystem down to match the filesystem:

e2fsck -f /dev/mapper/vgxubuntu-root        # resize2fs insists on a forced check first
resize2fs /dev/mapper/vgxubuntu-root 80G    # shrink the filesystem to 80GiB
lvreduce -L 80G /dev/mapper/vgxubuntu-root  # shrink the logical volume to match

This worked. The logical volume reported its new size as 80G, e2fsck ran successfully, and I was able to mount the filesystem and browse its contents.
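
The verification amounted to something like this (a sketch, not a saved transcript):

lvs                                     # confirm vgxubuntu-root now reports 80g
e2fsck -f /dev/mapper/vgxubuntu-root    # recheck the shrunken filesystem
mount /dev/mapper/vgxubuntu-root /mnt   # mount it and browse the contents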

Once this was done, I ran lvextend -L +10G /dev/mapper/vgxubuntu-home repeatedly until it reported insufficient capacity for further extensions. I was able to run the command four times, as expected. On the fifth run, the command reported that there were only 74 physical extents remaining. vgxubuntu-home now reported its size as 156G. Its previous size was roughly 115G.
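
For context, those 74 remaining physical extents, at 4 MiB each, amount to very little:

$$74 \times 4\ \text{MiB} = 296\ \text{MiB}$$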

Finally, I ran resize2fs /dev/mapper/vgxubuntu-home. The filesystem then grew to take up all available space in the volume.

After rebooting, the result was a home volume large enough for the foreseeable future.