[gentoo-user] LVM and the /usr Logical Volume
From dhk@21:1/5 to All on Wed Apr 6 02:30:01 2022
My new laptop is set up to dual boot and has a clean Gentoo install as
the second operating system. It looks like there may be an issue with
the /usr Logical Volume (LV) somewhere between LVM, initramfs and udev.
Only the base system has been installed and updated (no desktop).
The issue is that the /usr logical volume is not mounted as expected. After
booting without the livecd:
* The df -h command shows /usr on /dev/dm-1, not on
/dev/mapper/vg0-usr as in the fstab.
* My expectation is that it should follow the other LVs (home, var, opt,
vm) and appear in the vg0 volume group under /dev/mapper.
* However, the mount /usr command indicates that it is mounted
correctly: mount: /usr: /dev/mapper/vg0-usr already mounted or mount
point busy.
Is there something off here or is this correct behavior?
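One quick sanity check: /dev/mapper/vg0-usr should just be a udev-managed symlink to the kernel's /dev/dm-1 node, so the two names can be compared directly. A minimal sketch (the same_node helper name is made up for illustration):

```shell
#!/bin/sh
# same_node A B: succeed when both paths resolve to the same file or
# device node after following symlinks.
same_node() {
  [ "$(readlink -f -- "$1")" = "$(readlink -f -- "$2")" ]
}

# On the system in question this would compare the mapper symlink with
# the raw dm node; guarded so it is a no-op where the device is absent.
if [ -e /dev/mapper/vg0-usr ]; then
  same_node /dev/mapper/vg0-usr /dev/dm-1 && echo "same device"
fi
```

If the two paths resolve to the same node, df is merely reporting the kernel's canonical name for the device, not a different device.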
The laptop is a new HP Envy x360, 2-in-1 Flip Laptop, 15.6" Full HD Touchscreen, AMD Ryzen 7 5700U Processor, 64GB RAM and 1TB PCIe SSD.
Below are the /etc/fstab, the output of lsblk and df -h, and the links in
the volume groups, after booting to the livecd and after booting to the ssd.
Thank you
#
*****************************************************************************
# /etc/fstab: This is a dual boot system (Windows 11 & Gentoo), the
# same results occurred using straight mount points, LABEL and UUID.
#
*****************************************************************************
# <fs> <mountpoint> <type> <opts> <dump/pass>
#/dev/nvme0n1p1 /efi vfat noauto,noatime 1 2
#/dev/nvme0n1p2 /
#/dev/nvme0n1p3 /Win11
#/dev/nvme0n1p4 /Win11Data
#/dev/nvme0n1p5 /Win11Recovery
/dev/nvme0n1p6 /boot ext2 defaults,noatime 0 2
/dev/nvme0n1p7 none swap sw 0 0
/dev/nvme0n1p8 / ext4 defaults,noatime,discard 0 1
/dev/nvme0n1p9 /lib/modules ext4 defaults,noatime,discard 0 1
/dev/nvme0n1p10 /tmp ext4 defaults,noatime,discard 0 2
#/dev/mapper/vg0-usr /usr ext4 defaults,noatime,discard 0 0
#/dev/mapper/vg0-home /home ext4 defaults,noatime,discard 0 1
#/dev/mapper/vg0-opt /opt ext4 defaults,noatime,discard 0 1
#/dev/mapper/vg0-var /var ext4 defaults,noatime,discard 0 1
#/dev/mapper/vg1-vm /vm ext4 noauto,noatime,discard,user 0 1
#Use blkid /dev/mapper/* to get the LABEL and UUID (quotes cause errors).
LABEL=usr /usr ext4 defaults,noatime,discard 0 0
LABEL=home /home ext4 defaults,noatime,discard 0 1
LABEL=opt /opt ext4 defaults,noatime,discard 0 1
LABEL=var /var ext4 defaults,noatime,discard 0 1
LABEL=vm /vm ext4 noauto,noatime,discard,user 0 1
#UUID=d9237094-6589-4e90-989d-17bfe74082a4 /usr ext4 defaults,noatime,discard 0 0
#UUID=53831f3e-6266-4186-a7e1-90ecd027b981 /home ext4 defaults,noatime,discard 0 1
#UUID=cbdfcbb5-dff1-4b21-8eca-d1684b621fb2 /opt ext4 defaults,noatime,discard 0 1
#UUID=d43c8c7a-1a83-42f7-958d-9402e7bcc48f /var ext4 defaults,noatime,discard 0 1
#UUID=95ea1fcc-df9d-4c0b-bce4-a979f8430728 /vm ext4 noauto,noatime,discard,user 0 1
/dev/cdrom /mnt/cdrom auto rw,exec,noauto,user 0 0
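The blkid hint in the fstab comment above can be wrapped up as a one-liner; the fs_label helper name here is mine, only the blkid flags come from the comment:

```shell
#!/bin/sh
# Print just the filesystem label that a LABEL= fstab entry matches
# against; -s LABEL selects the field, -o value drops the NAME= prefix.
fs_label() {
  blkid -s LABEL -o value -- "$1"
}

# e.g. fs_label /dev/mapper/vg0-usr should print "usr" on this system;
# guarded so it is skipped where the device does not exist.
if [ -e /dev/mapper/vg0-usr ]; then
  fs_label /dev/mapper/vg0-usr
fi
```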
#
*****************************************************************************
# Booting to the livecd and before chroot, all looks good.
#
*****************************************************************************
livecd ~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 385.7M 1 loop /mnt/livecd
sda 8:0 1 2G 0 disk
└─sda1 8:1 1 2G 0 part /mnt/cdrom
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 100M 0 part
├─nvme0n1p2 259:2 0 16M 0 part
├─nvme0n1p3 259:3 0 52.2G 0 part
├─nvme0n1p4 259:4 0 40.2G 0 part
├─nvme0n1p5 259:5 0 608.6M 0 part
├─nvme0n1p6 259:6 0 2.8G 0 part /mnt/gentoo/boot
├─nvme0n1p7 259:7 0 4.7G 0 part [SWAP]
├─nvme0n1p8 259:8 0 9.3G 0 part /mnt/gentoo
├─nvme0n1p9 259:9 0 3.7G 0 part /mnt/gentoo/lib/modules
├─nvme0n1p10 259:10 0 2.8G 0 part /mnt/gentoo/tmp
├─nvme0n1p11 259:11 0 186.3G 0 part
│ ├─vg0-usr 253:1 0 25G 0 lvm /mnt/gentoo/usr
│ ├─vg0-var 253:2 0 20G 0 lvm /mnt/gentoo/var
│ ├─vg0-home 253:3 0 80G 0 lvm /mnt/gentoo/home
│ └─vg0-opt 253:4 0 20G 0 lvm /mnt/gentoo/opt
├─nvme0n1p12 259:12 0 186.3G 0 part
│ └─vg1-vm 253:0 0 150G 0 lvm /mnt/gentoo/vm
├─nvme0n1p13 259:13 0 93.1G 0 part
├─nvme0n1p14 259:14 0 93.1G 0 part
├─nvme0n1p15 259:15 0 46.6G 0 part
├─nvme0n1p16 259:16 0 46.6G 0 part
├─nvme0n1p17 259:17 0 46.6G 0 part
├─nvme0n1p18 259:18 0 46.6G 0 part
├─nvme0n1p19 259:19 0 46.6G 0 part
└─nvme0n1p20 259:20 0 23.5G 0 part
livecd ~ # df -h
Filesystem Size Used Avail Use% Mounted on
none 32G 704K 32G 1% /run
udev 10M 0 10M 0% /dev
shm 32G 0 32G 0% /dev/shm
tmpfs 32G 60M 32G 1% /
/dev/sda1 2.0G 436M 1.6G 22% /mnt/cdrom
/dev/loop0 386M 386M 0 100% /mnt/livecd
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
/dev/nvme0n1p8 9.1G 915M 7.7G 11% /mnt/gentoo
/dev/nvme0n1p6 2.8G 105M 2.6G 4% /mnt/gentoo/boot
/dev/nvme0n1p9 3.6G 112M 3.3G 4% /mnt/gentoo/lib/modules
/dev/nvme0n1p10 2.7G 32K 2.6G 1% /mnt/gentoo/tmp
/dev/mapper/vg0-usr 25G 3.7G 20G 16% /mnt/gentoo/usr
/dev/mapper/vg0-var 20G 2.4G 17G 13% /mnt/gentoo/var
/dev/mapper/vg0-home 79G 24K 75G 1% /mnt/gentoo/home
/dev/mapper/vg0-opt 20G 14M 19G 1% /mnt/gentoo/opt
/dev/mapper/vg1-vm 147G 28K 140G 1% /mnt/gentoo/vm
tmpfs 32G 0 32G 0% /mnt/gentoo/dev/shm
#
*****************************************************************************
# Booting to the livecd and after chroot, all looks good.
#
*****************************************************************************
(chroot) livecd # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 385.7M 1 loop
sda 8:0 1 2G 0 disk
└─sda1 8:1 1 2G 0 part
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 100M 0 part
├─nvme0n1p2 259:2 0 16M 0 part
├─nvme0n1p3 259:3 0 52.2G 0 part
├─nvme0n1p4 259:4 0 40.2G 0 part
├─nvme0n1p5 259:5 0 608.6M 0 part
├─nvme0n1p6 259:6 0 2.8G 0 part /boot
├─nvme0n1p7 259:7 0 4.7G 0 part [SWAP]
├─nvme0n1p8 259:8 0 9.3G 0 part /
├─nvme0n1p9 259:9 0 3.7G 0 part /lib/modules
├─nvme0n1p10 259:10 0 2.8G 0 part /tmp
├─nvme0n1p11 259:11 0 186.3G 0 part
│ ├─vg0-usr 253:1 0 25G 0 lvm /usr
│ ├─vg0-var 253:2 0 20G 0 lvm /var
│ ├─vg0-home 253:3 0 80G 0 lvm /home
│ └─vg0-opt 253:4 0 20G 0 lvm /opt
├─nvme0n1p12 259:12 0 186.3G 0 part
│ └─vg1-vm 253:0 0 150G 0 lvm /vm
├─nvme0n1p13 259:13 0 93.1G 0 part
├─nvme0n1p14 259:14 0 93.1G 0 part
├─nvme0n1p15 259:15 0 46.6G 0 part
├─nvme0n1p16 259:16 0 46.6G 0 part
├─nvme0n1p17 259:17 0 46.6G 0 part
├─nvme0n1p18 259:18 0 46.6G 0 part
├─nvme0n1p19 259:19 0 46.6G 0 part
└─nvme0n1p20 259:20 0 23.5G 0 part
(chroot) livecd # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p8 9.1G 915M 7.7G 11% /
/dev/nvme0n1p6 2.8G 105M 2.6G 4% /boot
/dev/nvme0n1p9 3.6G 112M 3.3G 4% /lib/modules
/dev/nvme0n1p10 2.7G 32K 2.6G 1% /tmp
/dev/mapper/vg0-usr 25G 3.7G 20G 16% /usr
/dev/mapper/vg0-var 20G 2.4G 17G 13% /var
/dev/mapper/vg0-home 79G 24K 75G 1% /home
/dev/mapper/vg0-opt 20G 14M 19G 1% /opt
/dev/mapper/vg1-vm 147G 28K 140G 1% /vm
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
udev 10M 0 10M 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
none 32G 704K 32G 1% /run
#
*****************************************************************************
# Booting to the new system, df -h does not show /usr in
# the vg0 volume group under /dev/mapper.
#
*****************************************************************************
newhost / # df -h
Filesystem Size Used Avail Use% Mounted on
none 32G 604K 32G 1% /run
udev 10M 0 10M 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
/dev/nvme0n1p8 9.1G 916M 7.7G 11% /
/dev/dm-1 25G 3.9G 20G 17% /usr   # This looks wrong; the expectation is /dev/mapper/vg0-usr.
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
/dev/nvme0n1p6 2.8G 105M 2.6G 4% /boot
/dev/nvme0n1p9 3.6G 112M 3.3G 4% /lib/modules
/dev/nvme0n1p10 2.7G 32K 2.6G 1% /tmp
/dev/mapper/vg0-home 79G 24K 75G 1% /home
/dev/mapper/vg0-opt 20G 7.3M 19G 1% /opt
/dev/mapper/vg0-var 20G 2.8G 16G 15% /var
newhost / # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 100M 0 part
├─nvme0n1p2 259:2 0 16M 0 part
├─nvme0n1p3 259:3 0 52.2G 0 part
├─nvme0n1p4 259:4 0 40.2G 0 part
├─nvme0n1p5 259:5 0 608.6M 0 part
├─nvme0n1p6 259:6 0 2.8G 0 part /boot
├─nvme0n1p7 259:7 0 4.7G 0 part [SWAP]
├─nvme0n1p8 259:8 0 9.3G 0 part /
├─nvme0n1p9 259:9 0 3.7G 0 part /lib/modules
├─nvme0n1p10 259:10 0 2.8G 0 part /tmp
├─nvme0n1p11 259:11 0 186.3G 0 part
│ ├─vg0-usr 253:1 0 25G 0 lvm /usr   # This looks right.
│ ├─vg0-var 253:2 0 20G 0 lvm /var
│ ├─vg0-home 253:3 0 80G 0 lvm /home
│ └─vg0-opt 253:4 0 20G 0 lvm /opt
├─nvme0n1p12 259:12 0 186.3G 0 part
│ └─vg1-vm 253:0 0 150G 0 lvm
├─nvme0n1p13 259:13 0 93.1G 0 part
├─nvme0n1p14 259:14 0 93.1G 0 part
├─nvme0n1p15 259:15 0 46.6G 0 part
├─nvme0n1p16 259:16 0 46.6G 0 part
├─nvme0n1p17 259:17 0 46.6G 0 part
├─nvme0n1p18 259:18 0 46.6G 0 part
├─nvme0n1p19 259:19 0 46.6G 0 part
└─nvme0n1p20 259:20 0 23.5G 0 part
newhost / # ls -l /dev/vg0 /dev/vg1
/dev/vg0:
total 0
lrwxrwxrwx 1 root root 7 Apr 4 03:32 home -> ../dm-3
lrwxrwxrwx 1 root root 7 Apr 4 03:32 opt -> ../dm-4
lrwxrwxrwx 1 root root 7 Apr 4 03:32 usr -> ../dm-1   # This looks right.
lrwxrwxrwx 1 root root 7 Apr 4 03:32 var -> ../dm-2
/dev/vg1:
total 0
lrwxrwxrwx 1 root root 7 Apr 4 03:32 vm -> ../dm-0
# mount /usr
mount: /usr: /dev/mapper/vg0-usr already mounted or mount point busy.
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)
From dhk@21:1/5 to All on Mon Apr 25 15:40:01 2022
Having /dev/dm-1 mounted on /usr would not be an issue if it were
supposed to be that way; however, nothing in the handbook, or anything
else I have read, says that it is correct. In addition, every other system
I have set up or used always showed /usr mounted from the device named in
the fstab.
My primary questions are:
* Why is it different this time?
* What changed to make /usr mount from the raw block device?
* Why is the /usr record in the fstab being ignored and handled differently than /var, /opt, /home and /vm?
Even though everything seems to work correctly, without a good,
authoritative explanation my confidence in the system's stability is not
high, which is keeping me from relying on it as a primary host.
My concerns about not having a good explanation for why df -h shows
/dev/dm-1 on /usr instead of /dev/mapper/vg0-usr are:
* There could be problems interfacing directly with the block device (/dev/dm-1) instead of the link (/dev/mapper/vg0-usr).
* When it comes time to extend the /usr logical volume with commands
like lvextend, resize2fs and lvresize, it may cause problems.
* The documentation does not say this is correct; in fact, it says the
opposite: the fstab determines the mount points.
* It looks like the initramfs is not letting go of its temporary /usr
mount and mounting /usr from the vg0-usr logical volume correctly.
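On the lvextend concern specifically, here is a sketch of how the eventual resize might go, assuming the vg0/usr layout from this post (the +5G size and the grow_usr name are mine, and --test keeps it a dry run). lvextend addresses the LV by VG/LV name and, with --resizefs, calls resize2fs itself, so the /dev/dm-1 vs /dev/mapper/vg0-usr naming should not come into play:

```shell
#!/bin/sh
# Sketch only: grow vg0/usr by 5G and resize its ext4 filesystem in one
# step.  --test asks LVM to simulate the change without touching any
# metadata; drop it for the real run.
grow_usr() {
  lvextend --test --resizefs --size +5G vg0/usr
}

# Only attempt this where the LVM tools and the vg0 volume group exist.
if command -v lvextend >/dev/null 2>&1 && vgs vg0 >/dev/null 2>&1; then
  grow_usr
fi
```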
After reinstalling Gentoo with a new liveusb, my system still looks much
the way it did before. I started with the existing partition scheme, wiped
everything, and performed a separate, independent install. I am still not
sure why the /dev/dm-1 block device is mounted on /usr, which is not what
the fstab instructs.
UUIDs are not being used because the handbook says:
*Important:* UUIDs of the filesystem on a LVM volume and its LVM
snapshots are identical, therefore using UUIDs to mount LVM volumes
should be avoided.
/etc/fstab:
/dev/nvme0n1p6 /boot ext2 defaults,noatime 0 2
/dev/nvme0n1p7 none swap sw 0 0
/dev/nvme0n1p8 / ext4 defaults,noatime,discard 0 1
/dev/nvme0n1p9 /lib/modules ext4 defaults,noatime,discard 0 1
/dev/nvme0n1p10 /tmp ext4 defaults,noatime,discard 0 1
/dev/mapper/vg0-usr /usr ext4 defaults,noatime,discard 0 0
/dev/mapper/vg0-home /home ext4 defaults,noatime,discard 0 1
/dev/mapper/vg0-opt /opt ext4 defaults,noatime,discard 0 1
/dev/mapper/vg0-var /var ext4 defaults,noatime,discard 0 1
/dev/mapper/vg1-vm /vm ext4 noauto,noatime,discard 0 1
/dev/cdrom /mnt/cdrom auto rw,exec,noauto,user 0 0
/etc/initramfs.mounts has:
/usr
# ls -l /dev/mapper/vg0-usr
lrwxrwxrwx 1 root root 7 Apr 23 05:56 /dev/mapper/vg0-usr -> ../dm-1
# mount /usr
mount: /usr: /dev/mapper/vg0-usr already mounted or mount point busy.
# df -h /usr
Filesystem Size Used Avail Use% Mounted on
/dev/dm-1 25G 3.2G 20G 14% /usr
Thank you