sudo lvs
does not show the deleted LVs/VMs. The UI disk usage (top right corner) shows the freed space as well (the space got released). I deleted the LVs using the command you advised (
sudo lvremove /dev/qubes_dom0/vm-whonix-gw-15-root.tick
etc.). I removed one more large VM; now the UI says 34.4% disk usage, but
vgdisplay
still says 0 Free PE, so something else is going on. I removed a couple of other VMs, but the Free PE is still 0.
$ sudo vgdisplay
  --- Volume group ---
  VG Name               qubes_dom0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  27755
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                82
  Open LV               14
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.47 GiB
  PE Size               4.00 MiB
  Total PE              60792
  Alloc PE / Size       60792 / 237.47 GiB
  Free  PE / Size       0 / 0
  VG UUID               007hBk-o2Kx-OdMy-970q-g5v7-mWxc-lMi29y
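One thing worth checking: on a default Qubes install, the thin pool (pool00) is itself an LV that claims essentially all extents in the VG up front, so vgdisplay can legitimately report 0 Free PE even after thin volumes are deleted; the freed space shows up as a lower fill percentage on the pool, not as free PEs. A minimal sketch of the extent arithmetic from the output above (pool00 is the default Qubes pool name; the lvs command at the end is a suggestion, not something run here):

```shell
# Extent math from the vgdisplay output above:
# 60792 PEs x 4 MiB per PE = 237.47 GiB, i.e. the whole VG is allocated.
awk 'BEGIN { printf "%.2f GiB\n", 60792 * 4 / 1024 }'   # prints "237.47 GiB"

# What actually changes when thin volumes are removed is the pool's
# fill percentage, visible with (run in dom0):
#   sudo lvs -o lv_name,data_percent,metadata_percent qubes_dom0/pool00
```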
In the Qubes UI (top right corner), clicking the disk icon, it shows 46.8% disk usage (it was over 50% before I removed these VMs).
I think I actually need this. One large VM (~30 GB) doesn't boot (I can't start a terminal or anything), but before I delete it I want to back up some stuff, so it would be great if I could mount that volume from dom0 manually, right?
$ sudo mount /dev/qubes_dom0/vm-crowphale-private-snap crow-mount
mount: wrong fs type, bad option, bad superblock on /dev/mapper/qubes_dom0-vm--crowphale--private--snap,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
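A possible approach (a sketch, not verified on this system; the read-only mount option is an assumption about what's safest here): check what is actually on the volume before mounting, and mount read-only so nothing gets written to the broken VM's disk. The hyphen-doubling below just reconstructs the /dev/mapper name that appears in the error message:

```shell
# Sketch: inspect and read-only-mount a VM's private volume from dom0.
LV=vm-crowphale-private-snap
# device-mapper doubles hyphens inside name components, which is why the
# error message refers to qubes_dom0-vm--crowphale--private--snap:
MAPPER="qubes_dom0-$(printf '%s' "$LV" | sed 's/-/--/g')"
echo "/dev/mapper/$MAPPER"

# Suggested steps (run these in dom0):
cat <<EOF
sudo dmesg | tail                          # see why the mount failed
sudo blkid /dev/qubes_dom0/$LV             # what filesystem (if any) is on it
mkdir -p crow-mount
sudo mount -o ro /dev/qubes_dom0/$LV crow-mount
EOF
```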
Any pointers on how to do this?
I used
sudo lvremove
to delete 6 logical volumes (from already deleted VMs), but
sudo vgdisplay
still says 0 Free PE. Any more ideas? I'm gonna back up my data and delete more VMs in the meantime... Interesting. It seems lame that deleting a VM doesn't free up the logical space.
lvdisplay
shows many logical volumes; for instance, this one is associated with an already-deleted VM:

  --- Logical volume ---
  LV Path                /dev/qubes_dom0/vm-whonix-gw-15-root.tick
  LV Name                vm-whonix-gw-15-root.tick
  VG Name                qubes_dom0
  LV UUID                o2ftaB-vIuC-FVoc-C6XS-1U24-2rJQ-2ZkENz
  LV Write Access        read only
  LV Creation host, time dom0, 2020-06-05 13:27:13 -0400
  LV Pool name           pool00
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Mapped size            23.48%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:40
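A quick way to spot such leftover revision volumes might be to filter the LV names by suffix. A minimal sketch (the `.tick`/`.back` suffixes are taken from the output above; the sample names fed in via printf are hypothetical, standing in for real lvs output):

```shell
# Sketch: filter LV names for Qubes revision suffixes (.tick/.back).
# The printf below simulates output from (run in dom0):
#   sudo lvs --noheadings -o lv_name qubes_dom0
printf '%s\n' vm-work-private vm-whonix-gw-15-root.tick vm-personal-root.back |
  grep -E '\.(tick|back)$'

# Leftovers belonging to deleted VMs can then be removed one by one:
#   sudo lvremove qubes_dom0/vm-whonix-gw-15-root.tick
```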
As described in a Qubes forum post (https://forum.qubes-os.org/t/insufficient-free-space-when-upgrading-to-qubes-4-1/13995), I tried to remove two VMs, but it didn't help.
How can I shrink one of my logical volumes?
[user@dom0 ~]$ sudo vgdisplay
--- Volume group ---
VG Name qubes_dom0
System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  27203
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                118
  Open LV               20
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.47 GiB
  PE Size               4.00 MiB
  Total PE              60792
  Alloc PE / Size       60792 / 237.47 GiB
  Free  PE / Size       0 / 0
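On shrinking: with ext4 on an LV, the filesystem has to be shrunk before (or together with) the LV itself; lvreduce's -r flag does both in one step. A hedged sketch only (the volume name and target size are made up; back up first, since shrinking is risky and the volume must be unmounted):

```shell
# Sketch only: shrink an ext4-formatted LV (names/sizes hypothetical).
cat <<'EOF'
sudo umount /dev/qubes_dom0/vm-example-private        # must not be mounted
sudo e2fsck -f /dev/qubes_dom0/vm-example-private     # check the fs first
sudo lvreduce -r -L 5G qubes_dom0/vm-example-private  # -r shrinks the fs too
EOF
```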