Greetings all;
As usual, the man page may as well be written in Swahili. The NDE
syndrome, meaning No D-----d Examples.
I have those two 2T SSDs with a GPT partition table on both, allocated as
sdc1 and sdk1, formatted to ext4, and named and labeled lvm1 and lvm2.
They are temporarily mounted at /mnt/lvm1 and /mnt/lvm2.
How do I combine lvm1 and lvm2 into a single managed volume that I can
then rsync /home to, then switch fstab to mount it as /home on a reboot?
If it works, I'll kill the raid10, reformat it, and make it another lvm
for amanda to use.
I am determined to remove the raid10 /home from the list of suspects
causing my system lockups whenever a program such as OpenSCAD or digiKam,
or even firefox, wants to write a new file to my /home partition. This
delay does not lock up the whole shebang; already-open files in other
workspaces seem to run at normal speeds. But opening a simple gui file
requester, to choose where to put the file and possibly rename it, takes
anywhere from 30 seconds minimum to around 5 minutes just to draw the
requester on screen. And while you are waiting, wondering if you even
pressed the mouse button, there is ZERO acknowledgement that the button
has been pushed. The mouse can be moved normally, but until the requester
is fully drawn on screen, no other button clicks are registered.
This software raid10 worked perfectly for buster and bullseye, but it's
been nothing but a headache on bookworm. And only one person has tried to
help, suggesting strace, but its output is so copious it overflows the
32G of main memory in this machine, so I can't scroll back to the actual
program's start in the trace. I have used it in the past, a decade or
more back, and while it was noisy then, this is unusable.
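For what it's worth, strace's flood does not have to live in memory or a scrollback buffer: it can be sent to a file and restricted to file-related syscalls. A sketch (openscad and the output path are just examples here; the flags are standard strace options):

```shell
# Follow child processes (-f), time each syscall (-T), trace only
# file-related syscalls, and write to a file instead of the terminal.
strace -f -T -e trace=%file -o /tmp/openscad.trace openscad

# Each line ends with a <seconds> duration; sort on it to surface the
# slowest calls first.
sort -t'<' -k2 -rn /tmp/openscad.trace | head
```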
Thanks for help with dmsetup.
Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
- Louis D. Brandeis
On Fri, Nov 03, 2023 at 12:27:19PM -0400, gene heskett wrote:
<snip>
How big is your /home - is it bigger than 2T?
(Gene: ATM it's 20% of 1.8T.)
Do you need both drives to provide redundant storage - or is it just
a temporary storage place for /home while you get the rest of the RAID devices sorted out?
(Gene: Don't intend to in the final config.)
If I were you, I wouldn't start from here :)
Don't mount them as individual drives.
pvcreate to initialise each partition as a physical volume reserved for lvm
pvcreate /dev/sdc1
pvcreate /dev/sdk1
vgcreate to create a volume group from them
vgcreate HomeVolgroup /dev/sdc1 /dev/sdk1
Then use lvcreate to create the logical volume itself.
See also https://wiki.archlinux.org/title/LVM
Use lvm to do this: that's *exactly* what it's designed for - to allow you
to add volumes and extend disk sizes.
This is exactly how partman does it when you install. (In fact, you could do this quite well by rebooting to recovery and using that method to reformat the drives).
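Putting those steps together, the whole sequence might look like this. (A sketch only: HomeVolgroup, home_lv, and the mount point are placeholder names, and pvcreate/vgcreate wipe the partitions, so triple-check the device names first.)

```shell
# Turn both partitions into LVM physical volumes (destroys their contents)
pvcreate /dev/sdc1 /dev/sdk1

# Pool them into a single volume group
vgcreate HomeVolgroup /dev/sdc1 /dev/sdk1

# Carve one logical volume out of the pool; -l 100%FREE takes all the space
lvcreate -l 100%FREE -n home_lv HomeVolgroup

# The filesystem goes on top of the logical volume, not on the partitions
mkfs.ext4 /dev/HomeVolgroup/home_lv
mount /dev/HomeVolgroup/home_lv /mnt/newhome
```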
See above:
All the very best, as ever,
Andy
[amacater@debian.org]
On 11/3/23 09:27, gene heskett wrote:
<snip>
Not LVM, but FWIW I previously had my data on 2 @ 1.5 TB HDD md RAID1
LUKS ext4 on Debian 9:
October 22, 2017
1. Install StarTech HBA and Seagate 1.5 TB drives:
2. Set up mdadm RAID1:
    https://raid.wiki.kernel.org/index.php/RAID_setup
    2017-10-22 22:40:19 root@po ~
    # dd if=/dev/zero of=/dev/sdb count=2048
    2048+0 records in
    2048+0 records out
    1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0689901 s, 15.2 MB/s
    2017-10-22 22:46:16 root@po ~
    # dd if=/dev/zero of=/dev/sdc count=2048
    2048+0 records in
    2048+0 records out
    1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0501681 s, 20.9 MB/s
    2017-10-22 22:55:48 root@po ~
    # mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/disk/by-id/ata-ST31500341AS_redacted1 /dev/disk/by-id/ata-ST31500341AS_redacted2
    mdadm: Note: this array has metadata at the start and
       may not be suitable as a boot device. If you plan to
       store '/boot' on this device please ensure that
       your boot-loader understands md/v1.x metadata, or use
       --metadata=0.90
    mdadm: size set to 1465007488K
    mdadm: automatically enabling write-intent bitmap on large array
    Continue creating array? yes
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    2017-10-22 22:57:29 root@po ~
    # mdadm --detail --scan
    ARRAY /dev/md0 metadata=1.2 name=po:0 UUID=2fb7086d:ccedcd0c:b5c286e2:3eb96d45
    2017-10-22 23:00:37 root@po ~
    # mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    2017-10-22 23:06:45 root@po ~
    # ll /dev/md0
    brw-rw---- 1 root disk 9, 0 2017/10/22 22:56:59 /dev/md0
    2017-10-22 23:06:48 root@po ~
    # cryptsetup luksFormat /dev/md0
    WARNING!
    ========
    This will overwrite data on /dev/md0 irrevocably.
    Are you sure? (Type uppercase yes): YES
    Enter passphrase:
    Verify passphrase:
    2017-10-22 23:08:09 root@po ~
    # vi /etc/crypttab
       md0_crypt   /dev/md0                   none       luks
    2017-10-22 23:09:19 root@po ~
    # cryptdisks_start md0_crypt
    [....] Starting crypto disk...[info] md0_crypt (starting)...
    Please unlock disk /dev/md0 (md0_crypt): ********
    [ ok ] md0_crypt (started)...done.
    2017-10-22 23:14:45 root@po ~
    # mkfs.ext4 -L mirror -v /dev/mapper/md0_crypt
    mke2fs 1.43.4 (31-Jan-2017)
    fs_types for mke2fs.conf resolution: 'ext4'
    Filesystem label=mirror
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    91570176 inodes, 366251360 blocks
    18312568 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2514485248
    11178 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Filesystem UUID: d9054ef2-a68d-40fd-a63c-0dbef2d27455
    Superblock backups stored on blocks:
       32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
       4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
       102400000, 214990848
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (262144 blocks): done
    Writing superblocks and filesystem accounting information: done
    2017-10-22 23:16:13 root@po ~
    # mkdir /mnt/md0
    2017-10-22 23:16:18 root@po ~
    # vi /etc/fstab
       /dev/mapper/md0_crypt               /mnt/md0   ext4   defaults   0   2
    2017-10-22 23:17:51 root@po ~
    # mount /mnt/md0
    2017-10-22 23:18:12 root@po ~
    # mount | grep md0
    /dev/mapper/md0_crypt on /mnt/md0 type ext4 (rw,relatime,data=ordered)
    2017-10-22 23:22:49 root@po ~
    # rsync -a --progress --stats root@dipsy:/mnt/i3000d/ /mnt/md0/
    receiving incremental file list
    ./
    approx/
    approx/debian/
    ...
   Too long, too late, too noisy -- ^C out..
October 23, 2017
1. md0 not assembling at boot. STFW
    https://unix.stackexchange.com/questions/210416/new-raid-array-will-not-auto-assemble-leads-to-boot-problems
   Run commands:
    dpkg-reconfigure mdadm
    update-initramfs -u
2. Continue moving data:
    Number of files: 123,773 (reg: 109,013, dir: 14,688, link: 72)
    Number of created files: 95,179 (reg: 87,324, dir: 7,854, link: 1)
    Number of deleted files: 0
    Number of regular files transferred: 87,327
    Total file size: 807,206,168,902 bytes
    Total transferred file size: 775,906,834,780 bytes
    Literal data: 775,901,990,164 bytes
    Matched data: 4,844,616 bytes
    File list size: 2,599,972
    File list generation time: 0.001 seconds
    File list transfer time: 0.000 seconds
    Total bytes sent: 1,733,657
    Total bytes received: 776,098,128,065
    sent 1,733,657 bytes received 776,098,128,065 bytes 43,856,121.93 bytes/sec
    total size is 807,206,168,902 speedup is 1.04
    2017-10-23 14:58:48 root@po ~
    # df /mnt/md0
    Filesystem            1K-blocks     Used Available Use% Mounted on
    /dev/mapper/md0_crypt 1440961520 788658044 579036820 58% /mnt/md0
    2017-10-23 14:59:03 root@po ~
    # cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc[1] sdb[0]
         1465007488 blocks super 1.2 [2/2] [UU]
         bitmap: 0/11 pages [0KB], 65536KB chunk
    unused devices: <none>
October 23, 2017
1. Check for mdadm/kernel timeout mismatch:
    https://raid.wiki.kernel.org/index.php/Timeout_Mismatch
    2017-10-23 19:49:29 root@po ~
    # smartctl -l scterc /dev/disk/by-id/ata-ST31500341AS_redacted1
    smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
    Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
    SCT Error Recovery Control:
          Read: Disabled
         Write: Disabled
    2017-10-23 19:50:03 root@po ~
    # smartctl -l scterc /dev/disk/by-id/ata-ST31500341AS_redacted2
    smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
    Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
    SCT Error Recovery Control:
          Read: Disabled
         Write: Disabled
2. Create Perl script to set SCT Error Recovery Control read and write
   timeouts:
    2017-10-23 20:09:04 root@po ~
    # vi /usr/local/sbin/set-md0-scterc
    #!/usr/bin/env perl
    use strict;
    use warnings;
    my @drives = qw(
       /dev/disk/by-id/ata-ST31500341AS_redacted1
       /dev/disk/by-id/ata-ST31500341AS_redacted2
    );
    for my $drive (@drives) {
       my @line = (
       '/usr/sbin/smartctl',
       '-l',
       'scterc,70,70',
       $drive
       );
       print "+ @line\n";
       system @line and warn $!;
    }
    2017-10-23 20:13:26 root@po ~
    # chmod +x /usr/local/sbin/set-md0-scterc
    2017-10-23 20:17:35 root@po ~
    # set-md0-scterc
    + /usr/sbin/smartctl -l scterc,70,70 /dev/disk/by-id/ata-ST31500341AS_redacted1
    smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
    Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
    SCT Error Recovery Control set to:
          Read:    70 (7.0 seconds)
         Write:    70 (7.0 seconds)
    + /usr/sbin/smartctl -l scterc,70,70 /dev/disk/by-id/ata-ST31500341AS_redacted2
    smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
    Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
    SCT Error Recovery Control set to:
          Read:    70 (7.0 seconds)
         Write:    70 (7.0 seconds)
3. Create a crontab entry to run script at boot:
    2017-10-23 20:26:26 root@po ~
    # crontab -e -u root
       @reboot /usr/local/sbin/set-md0-scterc
    no crontab for root - using an empty one
    crontab: installing new crontab
   Disable timeouts, verify, then do a warm reboot -- timeouts set
   for md0 drives.
David
On 11/3/23 17:41, David Christensen wrote:
On 11/3/23 09:27, gene heskett wrote:
<snip>
Not LVM, but FWIW I previously had my data on 2 @ 1.5 TB HDD md RAID1
LUKS ext4 on Debian 9:
<snip>
Thank you David, printed for study, but it looks like it still uses mdadm,
which I wanted to avoid in order to make the test isolation complete.
Thanks for help with dmsetup.
<snip>
See also https://wiki.archlinux.org/title/LVM
Which also assumes an innate familiarity with all this that I don't have.
<snip>
Hi Gene,
On Fri, Nov 03, 2023 at 12:27:19PM -0400, gene heskett wrote:
Thanks for help with dmsetup.
dmsetup is very much the wrong approach for you - it's too
low-level.
LVM alone is probably not the best idea either. For your use case as
I understand it, mdraid in RAID1 or RAID10 is probably the best
solution.
I regret I am not able to assist you with the problems you have with
your existing RAID10. Changing it for just LVM, or trying to do it
"by hand" with dmsetup are likely to be mistakes however.
Maybe it is time to just buy a "black box" NAS device and make all
this someone else's problem in return for money. It's not the way I
go, but you have had a lot of trouble getting your own mdraid to
work.
Thanks,
Andy
I've got to the above point, but the first example that looked good created a 100% allocated, no-free-space "homevol".
So I used gparted to delete the partitions & reformat them to ext4 again, pvcreated them again and vgcreated it again, getting:
On Sat, Nov 04, 2023 at 07:46:09AM -0400, gene heskett wrote:
[...]
I've got to the above point, but the first example that looked good created a 100% allocated, no-free-space "homevol".
So I used gparted to delete the partitions & reformat them to ext4 again, pvcreated them again and vgcreated it again, getting:
Sorry, Gene -- I fear you have it backwards. The LVM stuff happens *below* the file system: you first add physical volumes (PV) to a volume group
(think a "pool"). From that you "cut out" logical volumes (LV), which
are just bunches of blocks which show themselves to the OS as "block
devices" (/dev/mapper/foo, typically).
On top of that you can put a file system (as you can on any "bunch of blocks",
i.e. a block device or file).
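Assuming the standard LVM2 reporting tools are installed, the three layers can be inspected directly, which helps make the pool model concrete:

```shell
# Physical volumes: the raw partitions handed over to LVM
pvs
# Volume groups: the pool of blocks built from those PVs
vgs
# Logical volumes: the block devices cut out of the pool
lvs
```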
Perhaps with a mental model in place, the docs make more sense.
Cheers
Indeed it does clarify the mechanics, thank you. Now, do I have to zero them first before I can create (pvcreate) them?
On Sat, Nov 04, 2023 at 01:20:08PM -0400, gene heskett wrote:
[...]
Indeed it does clarify the mechanics. thank you. Now do I have to
zero them first before I can then create (pvcreate) them,
Not necessarily. Unless, of course, there are sensitive data on them.
The process would go roughly:
# put the necessary PV metadata on your raw devices
pvcreate /dev/foo1 /dev/foo2 ...
# make them to a volume group named my-volgroup
vgcreate my-volgroup /dev/foo1 /dev/foo2 ...
# cut out a logical volume from that, named my-logvol
# (a size is required; -l 100%FREE uses all the free space)
lvcreate --name my-logvol -l 100%FREE my-volgroup
# put a file system on that logical volume
mkfs.ext4 /dev/my-volgroup/my-logvol
# mount it
mount /dev/my-volgroup/my-logvol /home
Now convince your boot setup to add the logical vols
and mount them (this somehow involves fstab).
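To make that permanent, the fstab line would look something like this (a sketch, reusing the hypothetical my-volgroup/my-logvol names from above; a UUID from blkid works just as well as the device path):

```shell
# /etc/fstab -- mount the logical volume as /home at boot
/dev/my-volgroup/my-logvol  /home  ext4  defaults  0  2
```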
Perhaps this [1] page is enlightening (just disregard the
talk about vagrant). I'm not yet quite sure you really
want this, but hey. Learning new tricks is what keeps one
happy.
In my case I actually have a volume group (spanning a single
physical device), but the use case is different: the physical
device is encrypted (laptops get lost) and I wanted to have
several partitions on it (and still move space from the one
to the other in a pinch).
Cheers & enjoy
[1] https://linuxhandbook.com/lvm-guide/
... my only previous experience with logical volumes 20 years ago
cost me dearly in terms of lost, irreplaceable data, like the only
pictures of my first wife ...
On 11/4/23 05:39, Andy Smith wrote:
Maybe it is time to just buy a "black box" NAS device and make all
this someone else's problem in return for money. It's not the way I
go, but you have had a lot of trouble getting your own mdraid to
work.
Good advice maybe, Andy Smith, thank you, but that puts it all at the
mercy of a $5 cable. Not exactly my cup of tea. I'm into designing
stuff in OpenSCAD and 3d printing the output, made into gcode with
Cura, and none of the common tools for making gcode to drive the
printers, nor the printers' own firmware, has learned how to roll the
code up into repetitive loops; they generate step-by-step printer
instructions instead, so even the simplest part is half a gig of
g-code. So the data is growing like a cancer. Really complex parts
might be 50 gigs of g-code and take the printer a week or more to
make.
G-code, properly written, is like a .pdf: it's the best compression we
have. I write it by hand for linuxcnc and have one 90-LOC file that
takes the machine 3 days to run. Remove the comments and it fits on
one side of one sheet of paper.
I've found I could temporarily unload the 6-port controller on the
mobo down to 1 drive, the boot disk containing everything but home,
which would give me enough ports to build another raid10 if I can
conjure up enough sata power plugs; I have some spare molex-to-sata
splitters. Or does pvcreate automatically null the formatting? I have
enough of the 2T gigastones to do that, but will that then fix my lack
of instant raid access? That would leave me with a blank home I could
then copy to the new raid10, which would give me a raid10 twice as big
as now.
To complicate that, I also have a wd 2T NVMe that has never been
plugged in, but I'm understanding that is not a mod but a whole new
install, and another 22-install disaster before it works this well,
unless the installer now has some manners or I unplug all the usb
stuff except the keyboard/mouse buttons. That possibly reduces the
sata count because it would become the boot drive. I can do the usb
cleanup long enough to do the install, now that I know about it.
Probably should download and burn the latest netinstall image first
though.
Perhaps my constant mewling about the broken installer has done some
good? Like asking me yes/no whether I want brltty and cura, or
whatever the hell it is that yells out every keypress from any
speakers it can find, locking up the machine for the duration of the
yell, just because it surveyed the usb stuff, found a usb-serial
adapter connected to a cm11a X10 controller, and so ASSUMES I'm blind
and installs that stuff w/o asking me. IDK. I don't want to get into
that situation ever again, because if you nuke brltty and cura so you
can work in peace, the sob won't reboot: grub gets stuck looking for
them and won't proceed with the boot, forcing yet another re-install.
Finally, someone (may even have been you) told me to unplug the usb
stuff FIRST; rebooting problem solved. Sorry, but this gets me started
on a rant about a broken installer.
I have 5 of those 2T drives, and another narrow PCIe sata card with
all 16 ports populated. That may serve as the foundation storage so
I can restart amanda and have some backups. A 2T raid10 for /home
out of 4 of them would satisfy the storage needs for a while, maybe
even for the rest of my time here. And an expandable linear lvm for
amanda would be a treat, if it's dependable. A maintenance PITA if
not.
WDYT?
On 11/4/23 04:46, gene heskett wrote:
... my only previous experience with logical volumes 20 years ago
cost me dearly in terms of lost, irreplaceable data, like the only
pictures of my first wife ...
On 11/4/23 05:22, gene heskett wrote:
On 11/4/23 05:39, Andy Smith wrote:
Maybe it is time to just buy a "black box" NAS device and make all
this someone else's problem in return for money. It's not the way I
go, but you have had a lot of trouble getting your own mdraid to work.
Good advice maybe, Andy Smith, thank you, but that puts it all at the
mercy of a $5 cable. Not exactly my cup of tea. I'm into designing
stuff in OpenSCAD and 3d printing the output after it's been made into
g-code with Cura, and none of the common tools for making g-code to
drive the printers, nor the printers' own firmware, has learned how to
roll the code up into repetitive loops; they generate step-by-step
printer instructions instead, so even the simplest part is half a gig
of g-code. So the data is growing like a cancer. Really complex parts
might be 50 gigs of g-code and take the printer a week or more to make.
G-code, properly written, is like a .pdf; it's the best compression we
have. I write it by hand for linuxcnc and have one 90-LOC file that
takes the machine 3 days to run. Remove the comments and it fits on
one side of one sheet of paper.
On 11/4/23 10:20, gene heskett wrote:
I've found I could temporarily unload the 6-port controller on the
mobo down to 1 drive, the boot disk containing everything but home,
which would give me enough ports to build another raid10 if I can
conjure up enough sata power plugs; I have some spare molex-to-sata
splitters. Or does pvcreate automatically null the formatting? I have
enough of the 2T gigastones to do that, but will that then fix my lack
of instant raid access? That would leave me with a blank home I could
then copy to the new raid10, which would give me a raid10 twice as big
as now.
To complicate that, I also have a wd 2T NVMe that has never been
plugged in, but I understand that is not a mod but a whole new
install, and another 22-install disaster before it works this well,
unless the installer now has some manners or I unplug all usb stuff
except the keyboard/mouse. That possibly reduces the sata count
because it would become the boot drive. I can do the usb cleanup long
enough to do the install now that I know about it. Probably should
download and burn the latest netinstall image first though.
Perhaps my constant mewling about the broken installer has done some
good? Like asking me yes/no whether I want brltty and cura, or
whatever the hell it is that yells out every keypress from any
speakers it can find, locking up the machine for the duration of the
yell, just because it surveyed the usb stuff, found a usb-serial
adapter connected to a cm11a X10 controller, and so ASSUMES I'm blind
and installs that stuff w/o asking me. IDK. I don't want to get into
that situation ever again, because if you nuke brltty and cura so you
can work in peace, the sob won't reboot; grub gets stuck looking for
them and won't proceed with the boot, forcing yet another re-install.
Finally, may even have been you, someone told me to unplug the usb
stuff FIRST; rebooting problem solved. Sorry, but this gets me started
on a rant about a broken installer.
I have 5 of those 2T drives. And another narrow PCIe sata card with
all 16 ports populated. That may serve as the foundation storage so
I can restart amanda and have some backups. A 2T raid10 for /home
out of 4 of them would satisfy the storage needs for a while, maybe
even for the rest of my time here. And an expandable linear lvm for
amanda would be a treat, if it's dependable; a maintenance PITA if
not.
WDYT?
Trying to do too much with too little equipment is a recipe for
disaster. Been there, done that, lost data.
My computing life became more reliable and less stressful when I bought additional computers, additional drives, and mobile racks:
https://www.startech.com/en-us/hdd/drw150satbk
https://www.startech.com/en-us/hdd/hsb220sat25b
https://www.startech.com/en-us/hdd/s25slotr
I suggest that you give the Asus a rest, buy or build another computer, install one small SSD (and mobile rack), install Debian, install one
very large HDD (with mobile rack), install Amanda, and back up
everything over the network. Once that is working, buy another HDD
(with mobile rack), either swap HDD's or duplicate the first HDD to the second HDD, store one HDD off-site, and repeat monthly.
In any case, burn your most valuable data to optical discs regularly.
David
Does anyone have experience with M-Disc media?
No. I trust a little bit more in RAID.
On 11/4/23 17:38, David Christensen wrote:
In any case, burn your most valuable data to optical discs regularly.
Not great advice unless you lock the resultant dvd away from all room
lighting. I have 3 100-disk spindles of dvd's bought years ago that are
no longer recognized in any of the 4 or 5 dvd writers I have, but one
box of rewritables about the same age, stored in a light-tight cardboard
box, will likely outlast me. Some of them have been wiped and reused
several times. Those on a spindle, with a clear dust cover letting the
light into the edges of the stack of disks?? Flaky in 5 years, gone w/o
a trace in 10; the drives don't see them at all. They do spin up for a
minute trying, reseeking and re-reading, but there is nothing readable
left in the disk starter track at the center of the disk to tell the
drive what it needs in order to be written to.
Lesson learnt: do not use optical media for long term storage unless
stored in tin boxes like the ones AOL gave away billions of, 20 years or
more ago.
On 11/4/23 15:26, gene heskett wrote:
[...]
I have been burning archive DVD-R discs for ~14 years and storing them
in a drawer (e.g. darkness). I checked the oldest just now and it reads okay.
I have heard of CD discs disintegrating if the lacquer is scratched:
https://en.wikipedia.org/wiki/Disc_rot
I have heard that RW media has a shorter lifespan than R media.
Does anyone have experience with M-Disc media?
https://en.wikipedia.org/wiki/M-DISC
David
On 11/4/23 17:55, gene heskett wrote:
FWIW the rw's I have that continue to work are Sony DVD+RW, well
over 5 years old now. I understand there is a DVD-RW but I've no
experience with them. Today my objection is the size. In comparison,
on a system driving 3d printers with g-code from Cura-5.4 that is not
rolled up into subroutine loops, I have some of the more complex and
large part files that will not fit on a dvd. So it's simply
impractical for me to back up to a measly 4.7Gig dvd.
That's why they invented Blu-ray:
   https://en.wikipedia.org/wiki/Blu-ray
   25 GB (single-layer)
   50, 66 GB (dual-layer)
   100, 128 GB (BDXL)
   (Up to four layers are possible in a standard form BD)
David
On 11/4/23 23:15, David Christensen wrote:
On 11/4/23 17:55, gene heskett wrote:
[...]
That's why they invented Blu-ray:
    https://en.wikipedia.org/wiki/Blu-ray
[...]
Shudder. Anything mechanical can be destroyed by a smoke particle 100x
too small to be seen with a good eye. I am a CET, and electronics in
general I understand the physics of; in electronics the only thing
moving is a few electrons here or there. As long as the voltage does
not force an electron thru the oxide layer that is the capacitor's
insulation, forming a leakage path that avalanches thru the oxide film
and essentially destroys the device, there is no physical reason that
it will not continue to do its job for hundreds or thousands of years.
It will be external environmental effects that eventually reach the
chip, and byproducts of the humidity let in by a breach of the package
sealing, that finally destroy it.
The size of a bit that is detectable on a disk is determined by the
wavelength of the light reading that bit. CD's were designed around the
IR lasers of the day, which emit near-infrared light (around 780
nanometers). DVD's were made possible by a shorter, visible-light
laser, then Blu-ray got that down to about 400 nanometers. The next gen
of those will need a uv laser, but we'll have to invent it first. Part
of that problem is that decent optical glass for the lenses does not
pass UV in usable amounts. Plastic lets it blast on thru, but can we
make plastic lenses that precise for the price bleeding-edge users
will pay? IDK.
On 11/4/23 21:05, gene heskett wrote:
[...]
Interesting tangent.
The point I was trying to make is that proper disaster preparedness
involves defense in depth. AFAIK your data and your backups are on the
same computer and you have no other recent backups or archives. If
true, then, as you already know, the computer is a single point of
failure that could destroy both data and backups.
And now you are touching HBA's, touching drives, and issuing root
commands in direct proximity to your data and backups. As you already
know, human error is the most common failure mode. I am worried that
you are going to make a mistake and suffer a data disaster (partial
or total). That is why I suggested that you give the Asus a rest and
build a backup server now. If you then trash the Asus, recovery will be
possible. A duplicate set of backups is wise in case something happens
to the primary backups (notably, human error during recovery).
David
Amanda has been able to use big disk storage for a couple of decades now,
On 11/5/23 01:46, David Christensen wrote:
I am
worried that you are going to make a mistake and suffer a data
disaster (partial or total). That is why I suggested that you give
the Asus a rest and build a backup server now.
I'm also into 3d printers, and that has made me familiar with the
arm64 sbc cards such as the bananapi-m5, which has 4 usb3 ports on a
2GHz 4-core cpu. Startech makes a usb3-to-sata adapter that can do
500M/sec to an SSD; I am doing it on an rpi4b. So I am tempted to
build my own NAS by using all 4 of those ports to hook 4 of these 2T
gigastones up as a 4T raid10. Run it headless by an ssh login, set up
the amanda server on it, set up amanda-client on the rest, set up
amanda to do any compression on the clients which are fast enough to
do it, and let the pi handle the actual storage, including its
database, which makes a recovery a matter of telling it which file and
how old. I normally set up for 60 to 90 days of retention. It won't be
fast, but it will be isolated from anything that fails on the rest of
my net.
On 11/5/23 01:47, gene heskett wrote:
[...]
Wow! I got Gene to consider my suggestion! :-)
David
On 11/5/23 01:04, Thomas Schmitt wrote:
Lesson learnt: Never overwrite the two youngest backups.
I try to use the term "backup" to mean a data copying process whereby
older data is overwritten by newer data.
I try to use the term "archive" to mean a data copying process
whereby the copy is never modified or erased.
David Christensen wrote:
I have been burning archive DVD-R discs for ~14 years and storing them
in a drawer (e.g. darkness). I checked the oldest just now and it reads
okay.
That's my experience too.
I check by MD5 sums which are stored on the medium together with the
data. If a medium turns out unreadable, then in nearly all cases it
does so directly after burning.
Lesson learnt: Never overwrite the two youngest backups.
I have heard of CD discs disintegrating if the lacquer is scratched:
https://en.wikipedia.org/wiki/Disc_rot
This never hit me. But i keep my media away from high moisture, corroding chemicals, and abrasive substances.
I have heard that RW media has a shorter lifespan than R media.
Not to my experience. I have 4x CD-RW from 2002 (*), DVD+RW from 2004, and BD-RE from 2008 which still work for backups.
From time to time a medium
dies during writing. This seems not to be closely related to age, though.
(*) I have older 2x CD-RW, but no burner any more which would accept them.
They can be still read.
Does anyone have experience with M-Disc media?
Only by reports of libburn users which say that M-Disc works like the
other media. Their durability can hardly be evaluated, given that normal media seem to last for more than 2 decades without much problem.
I think the better alternative for archived data would be to make several identical copies and to checkread them in intervals of a few years. As
soon as one of the copies shows problems, make new copies of the healthy ones. Payload checksums on the medium give extra trust,
although i only
once in my life watched a DVD giving bad data without an SCSI error.
That was reproducible with only a single drive. All others either read
the affected 32 KiB chunk correctly or threw error. Having more than one drive helps a lot when the read quality is on the edge.
A sincere rescue effort would look like:
xorriso -outdev /dev/sr0 \
-check_media use=outdev \
what=disc \
time_limit=7200 \
data_to="$HOME"/sr0.image \
sector_map="$HOME"/sr0.sector_map \
--
After 7200 seconds the attempt will end even if not finished.
If you want to abort earlier, do not press Ctrl+C but rather do
touch /var/opt/xorriso/do_abort_check_media
The disc image will emerge as file "$HOME"/sr0.image .
The file "$HOME"/sr0.sector_map will record which blocks could be read
without SCSI error. If the run is repeated, then only the missing
blocks will be retried.
One may repeat with the same drive or better with different ones.
The file "$HOME"/sr0.image and "$HOME"/sr0.sector_map may be carried to
other computers to continue the rescue attempt with their drives.
If there are other partly damaged identical copies of the archive medium, then one may use them too with the same sr0.image and sr0.sector_map
files.
If the image is an ISO 9660 filesystem made by xorriso with option -for_backup and thus contains MD5s, then one may check the resulting
image by:
xorriso -for_backup -indev "$HOME"/sr0.image -check_media --
Any protest would indicate that the rescue attempt was not successful.
If the directory tree is undamaged, one may check the data files by
xorriso -for_backup -indev "$HOME"/sr0.image -check_md5_r sorry / --
to get the paths of files with damaged content.
Other than with most rescue tools, the read operations of xorriso on an optical drive will be done by direct SCSI commands, not by the Linux block layer. This has the advantage that error messages are more specific than
just "i/o error" and that no inappropriate reading ahead will happen. Such reading ahead causes the old TAO CD bug of Linux which gave birth to interesting urban legends about a "fuzzy end" of data CDs.
David Christensen <dpchrist@holgerdanske.com> wrote:
On 11/5/23 01:04, Thomas Schmitt wrote:
Lesson learnt: Never overwrite the two youngest backups.
I try to use the term "backup" to mean a data copying process whereby
older data is overwritten by newer data.
I try to use the term "archive" to mean a data copying process
whereby the copy is never modified or erased.
You're entitled to do that I suppose, but I don't suppose most other
people do. They separate the words by their meanings and purpose.
Backups are intended for use recovering information that has become
lost. Archives are places to keep information for long term storage.
So your definition of archive is correct. But your definition of backup isn't. It's perfectly reasonable to have more than one version or age
of backup, but it's also perfectly reasonable to erase them at some
chosen age or version.
It is perfectly reasonable to discuss 'the two youngest backups', IMHO.
Adding checksum file(s) to the contents burned to disc is an important
step that should not be omitted.
i only once in my life watched a DVD giving bad data without an SCSI
error.
Interesting. My WAG is that there was a marginal/ ambiguous dot on disc (?).
I seem to recall a post by you that indicated *BSD lacked the features
needed for good optical drive/ media/ format support.
Can I get xorriso(1) on FreeBSD?
David Christensen wrote:
Adding checksum file(s) to the contents burned to disc is an important
step that should not be omitted.
I let xorriso compute and store the checksums in a non-file block range
at the end of the ISO filesystem. Each file gets an AAIP attribute which points to an MD5 in this checksum array.
The user only has to issue the xorriso command -for_backup or -md5 "on".
Thomas Schmitt wrote:
i only once in my life watched a DVD giving bad data without an SCSI
error.
Interesting. My WAG is that there was a marginal/ ambiguous dot on disc (?).
The astonishing fact is not the damaged data chunk but the drive's failure
to recognize the damage. There are substantial parity data wrapped around each DVD "ECC Block" which the drive may use for error detection and
possibly for correction. If i count correctly in MMC-5 Figure 26, then
32 KiB payload data get added 192 * 10 + 15 * 182 = 4650 bytes of parity data. ECMA-337 (about DVD+RW) mentions in 13.1 two more checksums with
6 bytes per 2048 bytes of payload data.
With so much redundancy it is highly unlikely that an alteration stays
undetected or that an error correction yields a wrong result.
Nevertheless, in this special combination of medium and drive, the
error was not reported or corrected by the drive; rather, a wrong data
chunk was handed out.
At that occasion an MD5 recorded by xorriso indicated the error.
Comparing the read results on several drives showed that it was about a single ECC block of 32 KiB payload.
I seem to recall a post by you that indicated *BSD lacked the features
needed for good optical drive/ media/ format support.
I have difficulties remembering ...
Can I get xorriso(1) on FreeBSD?
I don't have a FreeBSD test machine any more. So we have to rely on the
web ... xorriso seems to be available in the same versions as in Sid
(1.5.6) and Buster (1.5.0):
https://www.freshports.org/sysutils/xorriso/
GNU xorriso should compile on FreeBSD out of the box:
https://www.gnu.org/software/xorriso/#download
Older versions available on
https://ftp.gnu.org/gnu/xorriso
User experience reports are welcome.
If you want to abort earlier, do not press Ctrl+C but rather do
touch /var/opt/xorriso/do_abort_check_media
Are there tools other than xorriso(1) that can create a compatible checksum? Read the checksum?
My approach is *.md5 and *.sha256 sister files for each archive encrypted tarball file.
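For illustration, a minimal sketch of that sister-file scheme; the archive name and its contents here are made-up stand-ins:

```shell
# Hypothetical sketch of ".md5"/".sha256" sister files for an archive.
cd "$(mktemp -d)"                      # work in a scratch directory
f=archive-2023-11.tar.gz.gpg           # stand-in archive name
printf 'pretend encrypted tarball\n' > "$f"   # stand-in payload

md5sum    "$f" > "$f.md5"              # write the sister checksum files
sha256sum "$f" > "$f.sha256"

# Later, before trusting or restoring the archive:
md5sum    -c "$f.md5"
sha256sum -c "$f.sha256"
```

If the payload is ever corrupted, `md5sum -c` / `sha256sum -c` exit non-zero and name the failing file.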
Implementing algorithms from standards is non-trivial; bugs are not
uncommon.
Are there obstacles that make implementing proper SIGINT and SIGTERM
signal handlers prohibitively difficult? Ctrl+C is a common part of the
UI, familiar to most users. There should be serious reasons if it is
necessary to teach them to touch an application-specific file instead.
Greetings all;
As usual, the man page may as well be written in Swahili. The NDE
syndrome, meaning No D-----d Examples.
I have those 2 2T SSD's with a gpt partition table on both, allocated as
sdc1 and sdk1, formatted to ext4, named and labeled as lvm1 and lvm2.
Temp mounted as sdc1 and sdk1 to /mnt/lvm1 and /mnt/lvm2.
How do I create a single managed volume from the labels lvm1 and lvm2,
making a single volume that I can then rsync /home to, then switch
fstab to mount it as /home on a reboot?
On 03/11/23 at 17:27, gene heskett wrote:
[...]
How about using the debian-installer: burn the dvd image of Bookworm
12.2, put it into the DVD drive, then reboot the system. Choose
"Expert Install" and it's all menu-driven, from RAID device creation to
LVM logical device and logical volume names.
I don't know if you can do that from the debian-installer rescue disk
mode.
HTH
kind regards
On 11/6/23 08:47, Franco Martelli wrote:
On 03/11/23 at 17:27, gene heskett wrote:
[...]
You do not put a file system on the partitions you are using as LVM
physical volumes. And you do not mount them.
The rough procedure is
Create LVM physical volumes on raw disk partitions using pvcreate (or
lvm pvcreate) e. g.,
pvcreate /dev/sdc1
pvcreate /dev/sdk1
This gives you two physical volumes to use to create one or two volume
groups
Create an LVM volume group using vgcreate (or lvm vgcreate), e. g.,
vgcreate home-vg /dev/sdc1 /dev/sdk1
This gives you a volume group named "home-vg" with 4.4 TB raw storage in which you can create one or more logical volumes.
Create the logical volumes you want. It appears you want only one, to
be mounted at /home. For instance,
lvcreate --size 1024G -n home-volume home-vg
will create a 1 TB logical volume, represented under dev by /dev/home-vg/home-volume
Put a file system on the logical volume in the normal way, such as:
mkfs -t ext4 /dev/home-vg/home-volume
Mount the new volume (and put it in /etc/fstab for mounting at boot):
mount /dev/home-vg/home-volume /home
Doing this probably will not give you quite what you want. (For
instance, if I remember right, the entire logical volume would, in this
case, wind up on the first-named physical volume in the vgcreate
command.) The man pages for lvm and its subcommands offer a lot of
options for things like storage allocation between/among the multiple
physical volumes that make up a volume group, the size of allocation
units, such things as RAID level, and a large number of other
properties. You probably know what you want, and from what I've seen on
this list you seem quite able to fish it up out of the man pages, some
of which have usefully suggestive examples.
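Pulling the steps above together, here is a hedged sketch of the whole sequence, with `-i 2` striping the LV across both PVs so it doesn't land entirely on the first-named one. The device names /dev/sdc1 and /dev/sdk1 and the mount point are placeholders, and the commands are guarded behind RUN_LVM=1 so pasting this destroys nothing by accident:

```shell
# Hedged sketch only -- verify device names with lsblk first, then
# run as root with RUN_LVM=1 once you are sure they are correct.
if [ "${RUN_LVM:-0}" = 1 ]; then
    pvcreate /dev/sdc1 /dev/sdk1            # label both partitions as PVs
    vgcreate home-vg /dev/sdc1 /dev/sdk1    # pool them into one VG
    # -i 2 stripes the LV across both PVs instead of filling one first:
    lvcreate -i 2 -L 1T -n home-volume home-vg
    mkfs -t ext4 /dev/home-vg/home-volume
    mount /dev/home-vg/home-volume /mnt/newhome   # placeholder mount point
else
    echo "dry run: set RUN_LVM=1 to execute"
fi
```

With the guard unset the script only announces itself; that makes it safe to keep in shell history while double-checking the device names.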
OTOH, I would recommend ZFS for this based on experience with LVM and
ZFS in both commercial (e.g., HP-UX and Solaris) and Linux
environments. Both have learning curves that I would judge comparable,
both are flexible and fairly easy to manage, and both are or can be
highly resilient. On the whole, though, I prefer ZFS.
Regards,
Tom Dial
On 11/7/23 18:42, Tom Dial wrote:
On 03/11/23 at 17:27, gene heskett wrote:
[...]
You do not put a file system on the partitions you are using as LVM
physical volumes. And you do not mount them.
What do I do if a gpt partition table has already been made and an ext4
system is already installed? IOW just how "bare" a disk is needed? Is
writing a null gpt sufficient?
On Tue, Nov 07, 2023 at 07:19:40PM -0500, gene heskett wrote:
[...]
What do I do if a gpt partition table has already been made and an ext4
system is already installed? IOW just how "bare" a disk is needed? Is
writing a null gpt sufficient?
Hm. I may have missed something, but I've got the impression we are a
second round through this: you just slap the LVM infrastructure over
current data, it will overwrite what it needs to and mark the rest as
free space. It just replaces what was before on disk.
It /might/ warn you that you're about to overwrite potentially valuable
data; I don't remember.
Unless, of course, the data is sensitive: in that case you want to zero
(or better: random) it.
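If the leftover signatures bother you, a small sketch of clearing them first; it is demonstrated on a scratch image file so nothing real is touched, and /dev/sdX1 is a placeholder name:

```shell
# Sketch: scrub old partition-table/filesystem signatures before LVM.
# On a real partition you would run, as root (placeholder device name!):
#   wipefs -a /dev/sdX1
#   pvcreate /dev/sdX1
# Demonstrated here on a scratch image file instead:
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none   # stand-in "partition"
# Plant a fake GPT header signature at byte offset 512 (LBA 1):
printf 'EFI PART' | dd of="$img" bs=1 seek=512 conv=notrunc status=none
# Zero the first MiB, where partition tables and superblocks live:
dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc status=none
head -c 1048576 "$img" | tr -d '\0' | wc -c   # prints 0: nothing survived
```

For sensitive data, as noted above, zero (or randomize) the whole partition rather than just its first MiB.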
Cheers
On 11/8/23 00:34, tomas@tuxteam.de wrote:
On Tue, Nov 07, 2023 at 07:19:40PM -0500, gene heskett wrote:
Sounds good.
However I may go a different route. I have a not-yet-installed 2T WD
Black SN770 NVMe SSD, format 2280. This Asus Prime Z370-A II mobo has
two M.2 sockets, and the docs say both can take a 2280, but they
operate differently w/o really explaining the difference. The one in
the middle of the board, the M.2_2 socket, looks like I'd have to pull
the CPU and its radiator to really get at it, and the manual actually
only shows how to install in the lower M.2_1 socket, which also has a
heat-sinking cover that must be removed & reinstalled. Is this then the
preferred location, or is there an advantage to the other socket nearer
the CPU?
They are empty except for the ext4 install, and if pvcreate just slams
the new format on regardless, I'll rsync the 2T /home back to the
raid10 and unplug that controller before I put the install dvd in. I
also have another sata controller, this one with all 16 ports
installed.
And I just looked at that pair, and according to gparted they have both
been pvcreated, so I'll leave them alone and steal the dvd cable,
putting in a new 2T drive if I can rig power to it.
This mobo also claims to be able to do the intel version of raid on its
own sata ports. Does anyone here have experience doing that?
Thanks Tomas
Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
- Louis D. Brandeis
Double check - sometimes one socket may be intended primarily for "other"
M2 devices. There shouldn't be any particular difference
between the two - one is obviously easier to reach than the other. Occasionally, having two may mean that they run slightly slower.
On 11/7/23 18:42, Tom Dial wrote:
Hi Gene,
What do I do if a gpt partition table has already been made and an ext4 system is already installed? IOW just how "bare" a disk is needed? Is writing a null gpt sufficient?
On 11/6/23 08:47, Franco Martelli wrote:
On 03/11/23 at 17:27, gene heskett wrote:
Greetings all;
As usual, the man page may as well be written in swahili. The NDE syndrome, meaning No D-----d Examples.
I have those 2 2T SSD's with a gpt partition table on both, allocated as sdc1 and sdk1, formatted to ext4, named and labeled as lvm1 and lvm2.
Temp mounted as sdc1 and sdk1 to /mnt/lvm1 and /mnt/lvm2
How do I create a single managed volume out of lvm1 and lvm2, to make a single volume that I can then rsync /home to, then switch fstab to mount it as /home on a reboot?
You do not put a file system on the partitions you are using as LVM physical volumes. And you do not mount them.
Thank you Tom.
The rough procedure is
Create LVM physical volumes on raw disk partitions using pvcreate (or lvm pvcreate) e. g.,
pvcreate /dev/sdc1
pvcreate /dev/sdk1
This gives you two physical volumes to use to create one or two volume groups
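(Not from the original reply, but a quick sanity check at this point might look like the following; the device paths are the ones assumed in the example above.)

```shell
# List all LVM physical volumes the system knows about:
pvs

# Or show detail for the two new ones:
pvdisplay /dev/sdc1 /dev/sdk1
```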
Create an LVM volume group using vgcreate (or lvm vgcreate), e. g.,
vgcreate home-vg /dev/sdc1 /dev/sdk1
This gives you a volume group named "home-vg" with 4.4 TB raw storage in which you can create one or more logical volumes.
Create the logical volumes you want. It appears you want only one, to be mounted at /home. For instance,
lvcreate --size 1024G -n home-volume home-vg
will create a 1 TiB logical volume, represented under /dev by /dev/home-vg/home-volume
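(An aside, a hedged sketch rather than part of Tom's reply: by default lvcreate allocates linearly, filling one PV before touching the next, so if you want the data spread across both SSDs you can ask for striping explicitly. Names assume the home-vg example above.)

```shell
# -i 2 stripes the logical volume across two physical volumes;
# -l 100%FREE uses all remaining space in the VG instead of a fixed --size.
lvcreate -i 2 -l 100%FREE -n home-volume home-vg
```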
Put a file system on the logical volume in the normal way, such as:
mkfs -t ext4 /dev/home-vg/home-volume
Mount the new volume (and put it in /etc/fstab for mounting at boot):
mount /dev/home-vg/home-volume /home
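(The copy-then-switch step Gene asks about might be sketched like this; the /mnt/newhome mount point is hypothetical, and the rsync flags preserve hard links, ACLs, and extended attributes.)

```shell
# Mount the new LV somewhere temporary and copy /home onto it:
mount /dev/home-vg/home-volume /mnt/newhome
rsync -aHAX /home/ /mnt/newhome/

# Then an /etc/fstab entry along these lines mounts it as /home on reboot:
# /dev/home-vg/home-volume  /home  ext4  defaults  0  2
```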
Doing this probably will not give you what you want. (For instance, if I remember right, the entire logical volume would, in this case, wind up on the first-named physical volume in the vgcreate command.) The man pages for lvm and its subcommands offer a lot of options for things like storage allocation between/among multiple physical volumes that make up a volume group, the size of allocation units, such things as RAID level, and a large number of other properties. You probably know what you want.
OTOH, I would recommend ZFS for this based on experience with LVM and ZFS in both commercial (e. g., HP-UX and Solaris) and Linux environments. Both have learning curves that I would judge comparable, both are flexible and fairly easy to manage, and both are or can be highly resilient. On the whole, though, I prefer ZFS.
Regards,
Tom Dial
How about using the debian-installer: burn the DVD image of Bookworm 12.2, put it into the DVD drive, then reboot the system. You have to choose "Expert Install" and it's all menu driven, from RAID device creation to LVM logical device and logical volume names.
I don't know if you can do that from debian-installer rescue disk mode.
HTH
Kind regards
Cheers, Gene Heskett.
On 11/8/23 00:34, tomas@tuxteam.de wrote:
On Tue, Nov 07, 2023 at 07:19:40PM -0500, gene heskett wrote:
Sounds good.
[...]
What do I do if a gpt partition table has already been made and an ext4
system is already installed? IOW just how "bare" a disk is needed? Is
writing a null gpt sufficient?
Hm. I may have missed something, but I've got the impression we are a
second round through this: you just slap the LVM infrastructure over
current data, it will overwrite what it needs to and mark the rest as
free space. It just replaces what was before on disk.
It /might/ warn you that you're about overwriting potentially valuable
data, I don't remember.
But before I do yet another reinstall, 24th or so, two of the sata 2T's are installed, and I'm tempted to rsync the raid to one of them to see if reassigning /home to a copy of /home does away with this horrible lag I'm wanting to blame on the raid10.
Unless, of course, the data is sensitive: in that case you want to zero
(or better: random) it.
Cheers
Thank you.
Thanks Tomas
Cheers, Gene Heskett.
On 11/7/23 17:19, gene heskett wrote:
What do I do if a gpt partition table has already been made and
an ext4 system is already installed? IOW just how "bare" a disk
is needed? Is writing a null gpt sufficient?
You can ignore them, or not, as you like. If you want, you could
overwrite them with zeros or a pattern of your choice; I would not
bother.
Hello,
On Wed, Nov 08, 2023 at 05:19:01PM -0700, Tom Dial wrote:
On 11/7/23 17:19, gene heskett wrote:
What do I do if a gpt partition table has already been made and
an ext4 system is already installed? IOW just how "bare" a disk
is needed? Is writing a null gpt sufficient?
You can ignore them, or not, as you like. If you want, you could
overwrite them with zeros or a pattern of your choice; I would not
bother.
You do need to be very careful if you have put a GPT label on a
device and then incompletely wiped the device in order to use it for something else that doesn't involve a GPT label.
The reason for that is, there are several motherboards (EFI
firmwares?) out there that consider a missing GPT label with a
backup GPT present to be an indication that the device is corrupt.
They then helpfully copy the backup GPT back to the start of the
disk, corrupting any data that was there already.
Example:
https://news.ycombinator.com/item?id=18541493
"wipefs -a /dev/sda" should clear the GPT without having to write to
the entire device.
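(For the record, wipefs run with no options only reports what it finds, and its --backup option saves a copy of each signature before erasing, so a cautious sequence might be the following; the device name is a placeholder.)

```shell
# List the filesystem/RAID/partition-table signatures, changing nothing:
wipefs /dev/sdX

# Erase them all, saving a backup of each signature under $HOME first:
wipefs --all --backup /dev/sdX
```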
Thanks,
Andy
But before I do yet another reinstall, 24th or so, two of the sata 2T's
are installed, and I'm tempted to rsync the raid to one of them to see
if reassigning /home to a copy of /home does away with this horrible lag
I'm wanting to blame on the raid10.
They are empty except for the ext4 install and if pvcreate just slams
the new format regardless, I'll rsync the 2T /home back to the raid10,
and unplug that controller before I put the install dvd in. I also have another sata controller, this one with all 16 ports installed.
And I just looked at that pair, and according to gparted they have both been
pvcreated, so I'll leave them alone and steal the DVD cable, putting in a
new 2T drive if I can rig power to it.
This mobo also claims to be able to do the intel version of a raid on
its own sata ports. Does anyone here have experience doing that?
On 11/8/23 02:20, gene heskett wrote:
But before I do yet another reinstall, 24th or so, two of the sata
2T's are installed, and I'm tempted to rsync the raid to one of them
to see if reassigning /home to a copy of /home does away with this
horrible lag I'm wanting to blame on the raid10.
Testing your applications for file system issues using a single non-RAID device with one partition and an ext4 file system is a good
trouble-shooting technique. But, see my next comment.
They are empty except for the ext4 install and if pvcreate just slams
the new format regardless, I'll rsync the 2T /home back to the raid10,
and unplug that controller before I put the install dvd in. I also
have another sata controller, this one with all 16 ports installed.
And I just looked at that pair, and according to gparted they have both been
pvcreated, so I'll leave them alone and steal the DVD cable, putting in a
new 2T drive if I can rig power to it.
As I previously suggested, and as you previously seemed agreeable to, I
think you should stop working on the Asus and build a backup server.
Good disaster preparedness expedites system operations, maintenance, and change -- because you can take risky steps and recover if those steps fail.
This mobo also claims to be able to do the intel version of a raid on
its own sata ports. Does anyone here have experience doing that?
Yes, but I prefer software RAID -- because I can move the disks to
another computer with different hardware and the arrays will still work.
 Hardware RAID typically requires compatible hardware.
David

Understood beforehand. Thanks David, take care & stay well.
Indeed, technically-inclined people are often better served with Free
Software, and Free Software can also be a great choice for large
corporations who can either have on-site techsupport people or can hire
external support, but it is a lot more difficult to find commercial
support for the merely non-techie user. This is mostly the domain of
proprietary software :-(
The way out of this is having strong local user groups, which is,
of course, easier in densely populated areas.
I think this still only covers a small fraction of the problem.
It just lowers the bar of the "technically-inclined" limit.
I think many more people just want to have someone they can call on
the phone to help them get through their yearly technical problem.
In my experience I get much better support from the user community of
an open source product than I get from paid support of a commercial
product. Frequently I know more about the product than the person I am dealing with.
Stefan Monnier writes:
I think this still only covers a small fraction of the problem. It
just lowers the bar of the "technically-inclined" limit. I think many
more people just want to have someone they can call on the phone to
help them get through their yearly technical problem.
I think that to most people their "devices" (cellphone, desktop,
whatever) are appliances. They have no more interest in learning about
the internals of those than in the internals of their washing machines.
In my experience I get much better support from the user community of
an open source product than I get from paid support of a commercial product. Frequently I know more about the product than the person I am dealing with.
Same for me. But I suspect we're in the minority.
But yes, in a way convenience can drown out freedom. See that other
thread in this mailing list about mail providers. All people flocking
to gmail although it's clear that Google would like to kill mail
as we know it.
--
t
On Mon, Nov 13, 2023, 12:35 PM <tomas@tuxteam.de> wrote:
But yes, in a way convenience can drown out freedom. See that other
thread in this mailing list about mail providers. All people flocking
to gmail although it's clear that Google would like to kill mail
as we know it.
But mail as "they" know it has nothing to do with transport or
networking. They know it as a service not as anything else.
Like electricity. The "freedom" to exchange email is what
matters to them.
Just about everyone in the developed countries permits and is ok
with their electric/telecom/heating service coming from a monopoly, oligopoly, or government-owned entity. So the same situation for
email is ok with them as long as the cost is low.
But mail as "they" know it has nothing to do with transport or
networking. They know it as a service, not as anything else.
Like electricity. The "freedom" to exchange email is what
matters to them.

Especially if they can control that freedom.

Just about everyone in the developed countries permits and is ok
with their electric/telecom/heating service coming from a monopoly,
oligopoly, or government-owned entity. So the same situation for
email is ok with them as long as the cost is low.

The difference with utilities like electricity is that they are
_regulated_ monopolies.
There is at least a bit of government oversight to make sure the
electricity provider doesn't gouge its subscribers too badly.
In Canada they're threatening to cut off news feeds in retaliation for
the government's attempts to make them pay news providers for the data
they're redistributing. Most people are too ignorant to realize that
this is an idle threat.
"You get what you settle for."
-- Thelma and Louise
I settled for Debian. Worked out OK 'til now.