There's no cost to create a task listing on Microlancer.
Because you already have this open bounty showing here, you would probably want to use the "no escrow" option for the task listing, and then clearly indicate that it is a Bounty offer only (payable only to the first solution provided, whether that solution appears here on SN or there on Microlancer).
There is also an alternative approach, one that better protects the person who has found a solution. A tasker might reasonably worry that, since you are presumably a new client (employer) on Microlancer, no payment would actually occur after they do the work of providing the first valid solution.
So when creating the task listing, instead of choosing "no escrow", you could keep the default escrow setting but clearly state that the tasker (worker) should prepare the solution before submitting an offer.
When you accept a tasker's offer, you would then pay the LN invoice for the Bounty amount (which gets held in escrow by Microlancer). After that, the tasker would reveal their solution to you (e.g., in the notes, or as an attachment). If that solution is acceptable to you, then you release the escrow to the tasker.
So that's another approach to offering a bounty on Microlancer.
Run vgdisplay on dom0; I'll bet you don't have any free extents. Your upgrade is trying to take an LVM snapshot, but there aren't any available extents (multi-megabyte LVM blocks) in your volume group to allow copy-on-write (COW) on the running domU LV. You essentially need to shrink (or delete) one of your logical volumes with lvresize (shrink the contained filesystem before you shrink the LV) to free up some extents in your VG so LVM can take a COW snapshot during the hot upgrade.
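The shrink procedure sketched as commands, assuming a non-thin LV named "data" with an ext4 filesystem inside (both the LV name and the sizes are placeholders, not taken from this thread):

```shell
# 0) Always fsck before shrinking an ext filesystem
sudo e2fsck -f /dev/qubes_dom0/data
# 1) Shrink the filesystem FIRST, to less than the target LV size
sudo resize2fs /dev/qubes_dom0/data 20G
# 2) Then shrink the LV, leaving headroom above the filesystem
sudo lvresize -L 25G /dev/qubes_dom0/data
# 3) Confirm the VG now has free extents for the snapshot
sudo vgdisplay qubes_dom0 | grep Free
```

The ordering matters: shrinking the LV below the filesystem's size corrupts the filesystem, which is why the filesystem is resized first and the LV is left slightly larger.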
[user@dom0 ~]$ sudo vgdisplay
--- Volume group ---
VG Name qubes_dom0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 27203
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 118
Open LV 20
Max PV 0
Cur PV 1
Act PV 1
VG Size 237.47 GiB
PE Size 4.00 MiB
Total PE 60792
Alloc PE / Size 60792 / 237.47 GiB
Free PE / Size 0 / 0
That "Free PE / Size 0 / 0" line is what I'm talking about. You need that to be at least 40GB.
When you're deleting the VMs, you're leaving the LVs sitting there eating up your logical space. A VG is composed of PVs, and LVs are carved out of VGs. Let me know what lvdisplay says and tell me which of them should have been deleted, and I can help you delete them.
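A quick way to spot leftovers, sketched against a captured listing (on the live system you would pipe `sudo lvs --noheadings -o lv_name qubes_dom0` into the filter instead; the VM names below are examples, and the ".tick" pattern matches the stale snapshot-style names seen in this thread):

```shell
# Filter LV names for snapshot-style leftovers; adjust the pattern to
# whatever naming your stale volumes use. The sample listing stands in
# for live `lvs` output.
lvs_output='vm-personal-private
vm-whonix-gw-15-root.tick
vm-work-root'
printf '%s\n' "$lvs_output" | grep '\.tick$'   # prints vm-whonix-gw-15-root.tick
```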
Interesting. It seems lame that deleting a VM doesn't free up the logical space.
lvdisplay shows many logical volumes; for instance, this one is associated with an already-deleted VM:
--- Logical volume ---
LV Path /dev/qubes_dom0/vm-whonix-gw-15-root.tick
LV Name vm-whonix-gw-15-root.tick
VG Name qubes_dom0
LV UUID o2ftaB-vIuC-FVoc-C6XS-1U24-2rJQ-2ZkENz
LV Write Access read only
LV Creation host, time dom0, 2020-06-05 13:27:13 -0400
LV Pool name pool00
LV Status available
# open 0
LV Size 10.00 GiB
Mapped size 23.48%
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:40
Assuming that is the volume you want to remove, and the partition isn't mounted or being used by a domU, run sudo lvremove /dev/qubes_dom0/vm-whonix-gw-15-root.tick for each one that doesn't belong.
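When several leftovers need removing, a loop saves typing. A cautious sketch (the second volume name is hypothetical; verify each name with lvdisplay first, and drop the "echo" only when you're sure):

```shell
# Dry-run: prints the lvremove commands instead of executing them.
# Remove "echo" to actually delete the volumes.
for lv in vm-whonix-gw-15-root.tick vm-whonix-gw-15-private.tick; do
  echo sudo lvremove -y "qubes_dom0/$lv"
done
```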
I removed a couple of other VMs but the Free PE is still 0.
$ sudo vgdisplay
--- Volume group ---
VG Name qubes_dom0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 27755
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 82
Open LV 14
Max PV 0
Cur PV 1
Act PV 1
VG Size 237.47 GiB
PE Size 4.00 MiB
Total PE 60792
Alloc PE / Size 60792 / 237.47 GiB
Free PE / Size 0 / 0
VG UUID 007hBk-o2Kx-OdMy-970q-g5v7-mWxc-lMi29y
In the Qubes UI (top right corner), clicking the disk icon, it shows 46.8% disk usage (it was over 50% before I removed these VMs).
I used sudo lvremove to delete 6 logical volumes (from already deleted VMs), but sudo vgdisplay still says 0 Free PE. Any more ideas? I'm gonna backup my data and delete more VMs in the meantime...
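One possible explanation for Free PE staying at 0 (a guess based on the "LV Pool name pool00" line in the lvdisplay output above): the VM volumes live in an LVM thin pool, and the pool itself occupies the VG's extents, so removing thin volumes returns space to the pool rather than raising the VG's Free PE. If so, the number that should have dropped is the pool's data usage:

```shell
# Show how full the thin pool is; "pool00" is the pool name from the
# lvdisplay output earlier in this thread.
sudo lvs -o lv_name,data_percent,lv_size qubes_dom0/pool00
```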
Xen uses partitions for its domUs, just like Linux needs partitions on your drive. When you delete a domU, you are only 'killing' the machine. LVM lets you dynamically create and delete arbitrary partitions without needing to rearrange your disk partition table. When you run the domU creation script, it creates a partition (aka volume) for you based on how large you told it to make the VM drive. When you "delete" a domU you aren't really deleting anything, only killing the process. The volume stays there, and you could mount it manually from dom0 in case there's something on it you need.
I think I actually need this. One large VM (~30GB) doesn't boot (can't start terminal or anything), but before I delete it, I want to backup some stuff, so it would be great if I could mount that volume from dom0 manually, right?
$ sudo mount /dev/qubes_dom0/vm-crowphale-private-snap crow-mount
mount: wrong fs type, bad option, bad superblock on /dev/mapper/qubes_dom0-vm--crowphale--private--snap,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
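One thing worth trying (an assumption: the "-snap" volume may be an inactive thin snapshot, which LVM refuses to activate by default; -K tells lvchange to ignore that activation-skip flag):

```shell
# Activate the thin snapshot explicitly, then mount it read-only so
# nothing on it can be modified while you copy data out.
sudo lvchange -ay -K qubes_dom0/vm-crowphale-private-snap
sudo mount -o ro /dev/qubes_dom0/vm-crowphale-private-snap crow-mount
```

If that still fails, the base volume (/dev/qubes_dom0/vm-crowphale-private, without the -snap suffix) may be the one actually holding the filesystem.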