• f3tools vs Silicon Power 4T drive

    From gene heskett@21:1/5 to All on Wed Feb 14 23:10:02 2024
    Drive is plugged into a mobo USB-3 port via a StarTech USB3S2SAT3CB
    adapter cable.

    f3probe took over 16 seconds, but says it is the real thing:
    root@coyote:~# f3probe /dev/sdc
    F3 probe 8.0
    Copyright (C) 2010 Digirati Internet LTDA.
    This is free software; see the source for copying conditions.

    WARNING: Probing normally takes from a few seconds to 15 minutes, but
    it can take longer. Please be patient.

    Probe finished, recovering blocks... Done

    Good news: The device `/dev/sdc' is the real thing

    Device geometry:
    *Usable* size: 3.64 TB (7814037168 blocks)
    Announced size: 3.64 TB (7814037168 blocks)
    Module: 4.00 TB (2^42 Bytes)
    Approximate cache size: 0.00 Byte (0 blocks), need-reset=no
    Physical block size: 512.00 Byte (2^9 Bytes)

    Probe time: 16.04s

    The 2nd drive is a carbon copy of the first, so between them those two should yield 7.28T of storage.

    I have made 1 full partition on each one, and labeled those partitions as
    SiPwr_0 and SiPwr_1.
    I have not attempted to do anything else until the hardware is fully
    assembled. My only question is: will those partition names survive
    lvcreating an 11T lvm out of these and 2 more 2T gigastones?

    Thanks for any advice, since I have not dealt with an lvm in about 15+
    years, having tried it once when it first came out, with a high disaster
    rating back then. This time the experiment will be on something expendable
    in its early days.

    Thank you all.

    Take care, stay warm and well.

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Thu Feb 15 01:50:01 2024
    Hi,

    On Wed, Feb 14, 2024 at 05:09:02PM -0500, gene heskett wrote:
    I have made 1 full partiton om each one, a labeled those partitions as SiPwr_0 and SiPwr_1

    Please show us the command you used¹ to do that, so we know what
    exactly you are talking about, because as previously discussed
    there's a lot of different things that you like to call "partition
    labels".

    If we take that literally that would be a GPT partition name, but
    you've used this same terminology before and meant a filesystem
    label.

    My only question it will those partition names survive lvcreating an 11T lvm out of these and 2 more 2T gigastones.

    Assuming you meant partition name the first time as well, nothing
    you do other than a disk wipe or re-name should alter those
    partition names.

    But your chosen partition names don't make a lot of sense to me.
    You've picked names based on the type/manufacturer of device so you
    may as well have just used the names from /dev/disk/by-id/… which
    already have that information and are already never going to change.
    I don't know why you want to complicate matters.

    If instead you put filesystems on these partitions and labelled
    *those*, well, no, LVM goes under filesystems so those filesystems
    and their labels (and contents) are not long for this world.

    I have not dealt with an lvm in about 15+ years trying it once
    when it first came out with a high disaster rating then.

    I hope you are putting a level of redundancy under that LVM or are
    using the redundancy features of LVM (which you need to go out of
    your way to do). Otherwise by default what you'll have is not
    redundant and a device failure will lose at least the contents of
    that device, possibly more.
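
    (For reference, LVM's own redundancy is something you have to ask for
    explicitly. A minimal sketch, with made-up VG/LV names:

    # mirrored LV; survives the loss of one of the PVs it lands on
    lvcreate --type raid1 -m 1 -L 500G -n backup_lv backup_vg

    # or striped + mirrored across four PVs, raid10-style
    lvcreate --type raid10 -i 2 -m 1 -L 1T -n backup_lv backup_vg

    A plain "lvcreate -L ... -n ..." gives a linear, non-redundant LV.)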

    Regards,
    Andy

    ¹ and while you are there, maybe a post-it note with "I will show
    the exact command I used any time I write to debian-user" stuck to
    the top of the display of the screen you use to compose emails
    would help, because basically every thread you post here lacks
    that information.

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Andy Smith on Thu Feb 15 03:10:01 2024
    On 2/14/24 19:48, Andy Smith wrote:
    Hi,

    On Wed, Feb 14, 2024 at 05:09:02PM -0500, gene heskett wrote:
    I have made 1 full partiton om each one, a labeled those partitions as
    SiPwr_0 and SiPwr_1

    Please show us the command you used¹ to do that, so we know what
    exactly you are talking about, because as previously discussed
    there's a lot of different things that you like to call "partition
    labels".

    If we take that literally that would be a GPT partition name, but
    you've used this same terminology before and meant a filesystem
    label.

    My only question it will those partition names survive lvcreating an 11T lvm out of these and 2 more 2T gigastones.

    Assuming you meant partition name the first time as well, nothing
    you do other than a disk wipe or re-name should alter those
    partition names.

    But your chosen partition names don't make a lot of sense to me.
    You've picked names based on the type/manufacturer of device so you
    may as well have just used the names from /dev/disk/by-id/… which
    already have that information and are already never going to change.
    I don't know why you want to complicate matters.

    Will the by-id string fit in the space reserved for a label? That is, IF
    there were a connection between the /dev/sdc that udev assigns and
    anything in this list:

    root@coyote:~# ls /dev/disk/by-id
    ata-ATAPI_iHAS424_B_3524253_327133504865
    ata-Gigastone_SSD_GST02TBG221146
    ata-Gigastone_SSD_GST02TBG221146-part1
    ata-Gigastone_SSD_GSTD02TB230102
    ata-Gigastone_SSD_GSTD02TB230102-part1
    ata-Gigastone_SSD_GSTG02TB230206
    ata-Gigastone_SSD_GSTG02TB230206-part1
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part1
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part2
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part3
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part1
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part2
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part3
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part1
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part2
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part3
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part1
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part2
    ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part3
    ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V
    ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V-part1
    ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V-part2
    ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V-part3
    ata-SPCC_Solid_State_Disk_AA231107S304KG00080
    ata-SPCC_Solid_State_Disk_AA231107S304KG00080-part1
    md-name-_none_:1
    md-name-coyote:0
    md-name-coyote:0-part1
    md-name-coyote:2
    md-uuid-3d5a3621:c0e32c8a:e3f7ebb3:318edbfb
    md-uuid-3d5a3621:c0e32c8a:e3f7ebb3:318edbfb-part1
    md-uuid-57a88605:27f5a773:5be347c1:7c5e7342
    md-uuid-bb6e03ce:19d290c8:5171004f:0127a392
    usb-SPCC_Sol_id_State_Disk_1234567897E6-0:0
    usb-SPCC_Sol_id_State_Disk_1234567897E6-0:0-part1
    usb-USB_Mass_Storage_Device_816820130806-0:0
    wwn-0x5002538f413394a5
    wwn-0x5002538f413394a5-part1
    wwn-0x5002538f413394a5-part2
    wwn-0x5002538f413394a5-part3
    wwn-0x5002538f413394a9
    wwn-0x5002538f413394a9-part1
    wwn-0x5002538f413394a9-part2
    wwn-0x5002538f413394a9-part3
    wwn-0x5002538f413394ae
    wwn-0x5002538f413394ae-part1
    wwn-0x5002538f413394ae-part2
    wwn-0x5002538f413394ae-part3
    wwn-0x5002538f413394b0
    wwn-0x5002538f413394b0-part1
    wwn-0x5002538f413394b0-part2
    wwn-0x5002538f413394b0-part3
    wwn-0x5002538f42205e8e
    wwn-0x5002538f42205e8e-part1
    wwn-0x5002538f42205e8e-part2
    wwn-0x5002538f42205e8e-part3
    root@coyote:~#

    I dare you to find the disk that udev calls sdc in the above wall of text.

    Why can't you understand that I want a unique label for all of this
    stuff that is NOT a wall of HEX numbers no one can remember? It's not
    mounted, so blkid does NOT see it.

    If instead you put filesystems on these partitions and labelled
    *those*, well, no, LVM goes under filesystems so those filesystems
    and their labels (and contents) are not long for this world.

    I have not dealt with an lvm in about 15+ years trying it once
    when it first came out with a high disaster rating then.

    I hope you are putting a level of redundancy under that LVM or are
    using the redundancy features of LVM (which you need to go out of
    your way to do). Otherwise by default what you'll have is not
    redundant and a device failure will lose at least the contents of
    that device, possibly more.

    Regards,
    Andy

    ¹ and while you are there, maybe a post-it note with "I will show
    the exact command I used any time I write to debian-user" stuck to
    the top of the display of the screen you use to compose emails
    would help, because basically every thread you post here lacks
    that information.


    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to gene heskett on Thu Feb 15 03:20:01 2024
    On 2/14/24 17:48, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    On Wed, Feb 14, 2024 at 05:09:02PM -0500, gene heskett wrote:
    I have made 1 full partiton om each one, a labeled those partitions as SiPwr_0 and SiPwr_1

    Please show us the command you used¹ to do that, so we know what
    exactly you are talking about, because as previously discussed
    there's a lot of different things that you like to call "partition
    labels".

    This is what gparted calls a "partition label" and certainly does not
    need a 4.5 megabyte camera image to see, or even a 50k screen snap.
    Taking this screenshot was a pita, because the gparted window disappears
    behind the terminal screen when you click on take another shot, so you
    have to quit, then find gparted on the tool bar to bring it back to
    the front, then move it and the terminal so it's not totally hidden. Then
    rerun spectacle again, wasting a click bringing it forward; then 30 seconds
    later the spectacle instructions finally show up, and after 5 minutes of
    screwing around I finally got the screen shot attached to prove I'm not lying.


    The easy and accurate answer is to use a root console, fdisk(8) with --list-details, select the console session, and paste into a mail reply:

    2024-02-14 18:09:26 root@taz ~
    # fdisk --list-details /dev/sda
    Disk /dev/sda: 55.9 GiB, 60022480896 bytes, 117231408 sectors
    Disk model: INTEL SSDSC2CW06
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 816CF78F-AFAD-4F70-AAA0-B08C6CE95AE7
    First LBA: 34
    Last LBA: 117231374
    Alternative LBA: 117231407
    Partition entries LBA: 2
    Allocated partition entries: 128

    Device        Start       End   Sectors Type-UUID                            UUID                                 Name              Attrs
    /dev/sda1      2048   1953791   1951744 C12A7328-F81F-11D2-BA4B-00A0C93EC93B 5A1358F4-23A2-4CF6-A4E2-0A30A0FFC904 ESP
    /dev/sda2   1953792   3907583   1953792 0FC63DAF-8483-4772-8E79-3D69D8477DE4 B429D984-E32D-4BAE-A7AE-137168B0F0F3 taz_boot
    /dev/sda3   3907584   5861375   1953792 0FC63DAF-8483-4772-8E79-3D69D8477DE4 83862E6A-7B89-4AB9-A21D-BAEF3AD0F7A3 taz_swap_crypt
    /dev/sda4   5861376  29298687  23437312 0FC63DAF-8483-4772-8E79-3D69D8477DE4 2A708FD7-F6EE-49D7-8E23-65905BCD6512 taz_root_crypt
    /dev/sda5  29298688 117229567  87930880 0FC63DAF-8483-4772-8E79-3D69D8477DE4 B8468EA2-B66D-4D13-9FD1-E46AEDA58067 taz_scratch_crypt


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to gene heskett on Thu Feb 15 03:30:01 2024
    On 2/14/24 18:06, gene heskett wrote:
    Will the by-id string fit in the space reserved for a label?That IF
    there was a connection between the /dev/sdc that udev assigns and
    anything in this list:

    root@coyote:~# ls /dev/disk/by-id
    [the same 60-entry /dev/disk/by-id listing as in the previous message, snipped]
    root@coyote:~#

    I dare you to find the disk that udev calls sdc in the above wall of text.

    Why can't you understand that I want a unique label for all of this
    stuff that is NOT a wall of HEX numbers no one can remember.  Its not mounted, so blkid does NOT see it.


    For labeled disk partitions, use /dev/disk/by-label/* paths:

    2024-02-14 18:22:34 root@taz ~
    # ls -1 /dev/disk/by-label/
    sda3_crypt
    taz_boot
    taz_root
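
    There is a parallel directory for GPT partition names, which is the kind
    of name that keeps working after the filesystem on the partition is
    replaced (a sketch; the directory only exists on systems with named GPT
    partitions):

    # ls -1 /dev/disk/by-partlabel/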


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to gene heskett on Thu Feb 15 04:00:02 2024
    On 2/14/24 20:49, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    Hi,

    On Wed, Feb 14, 2024 at 05:09:02PM -0500, gene heskett wrote:
    I have made 1 full partiton om each one, a labeled those partitions as SiPwr_0 and SiPwr_1

    Please show us the command you used¹ to do that, so we know what
    exactly you are talking about, because as previously discussed
    there's a lot of different things that you like to call "partition
    labels".

    This is what gparted calls a "partition label" and certainly does not
    need a 4.5 megabyte camera image to see, or even a 50k screen snap.
    Taking this screenshot was a pita, because the gparted window disappears
    behind the terminal screen when you click on take another shot, so you
    have to quit, then find gparted on the tool bar to bring it back to
    the front, then move it and the terminal so it's not totally hidden. Then
    rerun spectacle again, wasting a click bringing it forward; then 30 seconds
    later the spectacle instructions finally show up, and after 5 minutes of
    screwing around I finally got the screen shot attached to prove I'm not lying.

    If we take that literally that would be a GPT partition name, but
    you've used this same terminology before and meant a filesystem
    label.

    My only question it will those partition names survive lvcreating an
    11T lvm
    out of these and 2 more 2T gigastones.

    Assuming you meant partition name the first time as well, nothing
    you do other than a disk wipe or re-name should alter those
    partition names.

    But your chosen partition names don't make a lot of sense to me.
    You've picked names based on the type/manufacturer of device so you
    may as well have just used the names from /dev/disk/by-id/… which
    already have that information and are already never going to change.
    I don't know why you want to complicate matters.

    If instead you put filesystems on these partitions and labelled
    *those*, well, no, LVM goes under filesystems so those filesystems
    and their labels (and contents) are not long for this world.

    I have not dealt with an lvm in about 15+ years trying it once
    when it first came out with a high disaster rating then.

    I hope you are putting a level of redundancy under that LVM or are
    using the redundancy features of LVM (which you need to go out of
    your way to do). Otherwise by default what you'll have is not
    redundant and a device failure will lose at least the contents of
    that device, possibly more.

    You pique my curiosity because this is going to be my backup system, but
    not a syllable about how to do it. You tell me it's fine 3 paragraphs up,
    then tell me lvcreate will wipe it out. I'm asking for answers, not
    more conundrums.
    Regards,
    Andy

    ¹ and while you are there, maybe a post-it note with "I will show
       the exact command I used any time I write to debian-user" stuck to
       the top of the display of the screen you use to compose emails
       would help, because basically every thread you post here lacks
       that information.


    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Max Nikulin on Thu Feb 15 04:10:01 2024
    On 2/14/24 21:14, Max Nikulin wrote:
    On 15/02/2024 08:48, gene heskett wrote:
    This is what gparted calls a "partition label" and certainly does not
    need a 4.5 megabyte camera image to see. or even a 50k screen snap.

    lsblk --fs -o +PARTLABEL  /dev/sdc

    NAME   FSTYPE FSVER LABEL   UUID                                 FSAVAIL FSUSE% MOUNTPOINTS PARTLABEL
    sdc
    └─sdc1 ext4   1.0   SiPwr_1 70bfe832-38b1-46ed-85f4-33cf473185bb

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Thu Feb 15 17:00:01 2024
    Hi,

    On Wed, Feb 14, 2024 at 09:06:43PM -0500, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    But your chosen partition names don't make a lot of sense to me.
    You've picked names based on the type/manufacturer of device so you
    may as well have just used the names from /dev/disk/by-id/… which
    already have that information and are already never going to change.
    I don't know why you want to complicate matters.

    Will the by-id string fit in the space reserved for a label?

    I doubt it, but what would be the point of doing that? The device ID
    conveys all the same information that you're putting in the
    partition name.

    I dare you to find the disk that udev calls sdc in the above wall of text.

    $ ls -l /dev/disk/by-id | grep sdb1
    lrwxrwxrwx 1 root root 10 Jan 17 02:49 ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAGA00863-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Jan 17 02:49 wwn-0x5002538c00066800-part1 -> ../../sdb1

    Thus, /dev/sdb1 is partition 1 of /dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAGA00863.
    Information already held by the kernel; no need to duplicate it in a
    GPT partition name or anywhere else.

    There are many other ways to retrieve the same information; that was
    the first that sprang to mind but I would not use that in a script
    because it's basically parsing ls (a big no-no).
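
    (A script-safe way to get the same answer, as a sketch using the same
    example partition:

    udevadm info -q symlink -n /dev/sdb1 | tr ' ' '\n' | grep '^disk/by-id/'

    udevadm prints every persistent symlink it created for that device,
    relative to /dev.)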

    If you'd simply state what you're trying to achieve then 99.9% of
    all your posts wouldn't be massive X/Y problems.

    Why can't you understand that I want a unique label for all of this stuff that is NOT a wall of HEX numbers no one can remember. Its not mounted, so blkid does NOT see it.

    See above. You're welcome.

    I note that you still haven't responded with the exact command you
    used to set these "labels", so at this point we still do not know
    exactly what you mean and I have to proceed assuming you meant GPT
    partition name. A simple request that would enable us to help you
    better, ignored.

    Regards,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Thu Feb 15 17:10:01 2024
    Hi,

    On Wed, Feb 14, 2024 at 08:48:31PM -0500, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    On Wed, Feb 14, 2024 at 05:09:02PM -0500, gene heskett wrote:
    I have made 1 full partiton om each one, a labeled those partitions as SiPwr_0 and SiPwr_1

    Please show us the command you used¹ to do that, so we know what
    exactly you are talking about, because as previously discussed
    there's a lot of different things that you like to call "partition
    labels".

    This is what gparted calls a "partition label"

    Okay, thanks for clarifying. This, or preferably a copy-paste of the
    actual parted command session would suffice.

    I don't know what the relevance is of the rest of the following
    paragraph - your life story is not required and you were not accused
    of lying, just asked to clarify.

    Do remember that this mailing list does not accept attachments (and
    very few mailing lists in general do), so any time you are tempted
    to send a photo to a mailing list it is probably an error. We did
    not see whatever it was, but it doesn't sound relevant.

    and certainly does not need a 4.5 megabyte camera image to see, or
    even a 50k screen snap. Taking this screenshot was a pita, because
    the gparted window disappears behind the terminal screen when you
    click on take another shot, so you have to quit, then find
    gparted on the tool bar to bring it back to the front, then move
    it and the terminal so it's not totally hidden. Then rerun
    spectacle again, wasting a click bringing it forward; then 30 seconds
    later the spectacle instructions finally show up, and after 5
    minutes of screwing around I finally got the screen shot attached
    to prove I'm not lying.

    Regards,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Wright@21:1/5 to Andy Smith on Thu Feb 15 17:30:01 2024
    On Thu 15 Feb 2024 at 16:12:06 (+0000), Andy Smith wrote:
    On Wed, Feb 14, 2024 at 09:56:07PM -0500, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    I hope you are putting a level of redundancy under that LVM or are using the redundancy features of LVM (which you need to go out of
    your way to do). Otherwise by default what you'll have is not
    redundant and a device failure will lose at least the contents of
    that device, possibly more.

    You pique my curiosity because this is going to be my backup system, but not
    a syllable about how to do it. You tell me its fine 3 paragraphs up. then tell me lvcreate will wipe it out. I'm asking for answers, not more connumdrums..

    You've split your reply to my mail across three different emails and
    now you're replying to a part about redundancy, but asking questions
    about something completely different, all while referring to bits
    that are not proximal to where your text is, so it's unclear to me
    exactly what you are asking about.

    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem
    labels".

    To my implied question about your redundancy plans (if any), you
    then complain that I have not given you "a syllable about how to do
    it". Do *what*? I don't yet know what your plans are in that regard.
    If you have questions, ask them.

    I think the paste in
    https://lists.debian.org/debian-user/2024/02/msg00611.html
    shows that SiPwr_1 is a filesystem LABEL, not a PARTLABEL,
    lying as it does between an FSVER and a UUID.
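
    For what it's worth, the two are set with different tools. A sketch, with
    Gene's device node and name assumed:

    # filesystem label: stored inside the ext4 filesystem, so it is lost
    # when the partition is turned into an LVM PV
    e2label /dev/sdc1 SiPwr_1

    # GPT partition name (PARTLABEL): stored in the partition table, and
    # survives pvcreate
    sgdisk --change-name=1:SiPwr_1 /dev/sdc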

    Cheers,
    David.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Thu Feb 15 17:20:01 2024
    Hi,

    On Wed, Feb 14, 2024 at 09:56:07PM -0500, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    I hope you are putting a level of redundancy under that LVM or are
    using the redundancy features of LVM (which you need to go out of
    your way to do). Otherwise by default what you'll have is not
    redundant and a device failure will lose at least the contents of
    that device, possibly more.

    You pique my curiosity because this is going to be my backup system, but not a syllable about how to do it. You tell me its fine 3 paragraphs up. then tell me lvcreate will wipe it out. I'm asking for answers, not more connumdrums..

    You've split your reply to my mail across three different emails and
    now you're replying to a part about redundancy, but asking questions
    about something completely different, all while referring to bits
    that are not proximal to where your text is, so it's unclear to me
    exactly what you are asking about.

    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem
    labels".

    To my implied question about your redundancy plans (if any), you
    then complain that I have not given you "a syllable about how to do
    it". Do *what*? I don't yet know what your plans are in that regard.
    If you have questions, ask them.

    Regards,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From debian-user@howorth.org.uk@21:1/5 to Andy Smith on Thu Feb 15 18:40:01 2024
    Andy Smith <andy@strugglers.net> wrote:

    On Wed, Feb 14, 2024 at 08:48:31PM -0500, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    Please show us the command you used¹ to do that, so we know what
    exactly you are talking about, because as previously discussed
    there's a lot of different things that you like to call "partition labels".

    This is what gparted calls a "partition label"

    Okay, thanks for clarifying. This, or preferably a copy-paste of the
    actual parted command session would suffice.

    I don't know what the relevance is of the rest of the following
    paragraph - your life story is not required and you were not accused
    of lying, just asked to clarify.

    Do remember that this mailing lists does not accept attachments (and
    very few mailing lists in general do), so any time you are tempted
    to send a photo to a mailing list it is probably an error. We did
    not see whatever it was, but it doesn't sound relevant.

    FWIW, the photo that Gene attached was certainly attached to the mail
    that the list sent to me, so I suppose that this list does permit
    attachments, at least in some circumstances.

    I do agree with your sentiment that the text output of a CLI command is
    both simpler and better though.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to debian-user@howorth.org.uk on Thu Feb 15 20:50:01 2024
    Hello,

    On Thu, Feb 15, 2024 at 05:32:34PM +0000, debian-user@howorth.org.uk wrote:
    Andy Smith <andy@strugglers.net> wrote:
    Do remember that this mailing lists does not accept attachments (and
    very few mailing lists in general do), so any time you are tempted
    to send a photo to a mailing list it is probably an error. We did
    not see whatever it was, but it doesn't sound relevant.

    FWIW, the photo that Gene attached was certainly attached to the mail
    that the list sent to me, so I suppose that this list does permit attachments, at least in some circumstances.

    Oh yes you're right, I see it too now I've looked properly!

    So now I actually think Gene means a filesystem label?

    Sigh, this really does not need to be this difficult.

    Anyway I see that the image of gparted says there's an ext4
    filesystem there. So, Gene: when you put those partitions into LVM
    (when you make them LVM Physical Volumes) the filesystems on them
    will be trashed, and so will the filesystem labels.
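
    (Spelled out, the sequence under discussion would look roughly like this;
    only a sketch, with made-up VG/LV names:

    pvcreate /dev/sdc1 /dev/sdd1        # existing ext4 signatures get wiped here
    vgcreate backup_vg /dev/sdc1 /dev/sdd1
    lvcreate -l 100%FREE -n backup_lv backup_vg
    mkfs.ext4 -L backups /dev/backup_vg/backup_lv

    The old SiPwr_* labels are gone after the first step; a new filesystem
    label can only go on the new filesystem created at the end.)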

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Andy Smith on Thu Feb 15 21:30:01 2024
    On 2/15/24 11:21, Andy Smith wrote:
    Hi,

    On Wed, Feb 14, 2024 at 09:56:07PM -0500, gene heskett wrote:
    On 2/14/24 19:48, Andy Smith wrote:
    I hope you are putting a level of redundancy under that LVM or are
    using the redundancy features of LVM (which you need to go out of
    your way to do). Otherwise by default what you'll have is not
    redundant and a device failure will lose at least the contents of
    that device, possibly more.

    You pique my curiosity because this is going to be my backup system, but not
    a syllable about how to do it. You tell me its fine 3 paragraphs up. then
    tell me lvcreate will wipe it out. I'm asking for answers, not more
    connumdrums..

    You've split your reply to my mail across three different emails and
    now you're replying to a part about redundancy, but asking questions
    about something completely different, all while referring to bits
    that are not proximal to where your text is, so it's unclear to me
    exactly what you are asking about.

    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem
    labels".

    I'm still confused, and it is not at all well clarified by looking at
    gparted, a shot of which I posted. Wikipedia seems to have the history
    but not the practice to the depth I'd like.

    I also looked at XFS on Wikipedia; it looks good, but I note it says the
    linux version is not complete. 2 more of the big Si Pwr 3.64T's
    will be here tomorrow, so I'll be inclined to put it together and see
    what I can make it do. There will no doubt be questions.

    To my implied question about your redundancy plans (if any), you
    then complain that I have not given you "a syllable about how to do
    it". Do *what*? I don't yet know what your plans are in that regard.
    If you have questions, ask them.

    Like which version of raid is the best at tolerating a failed drive, and
    which gives the best balance between redundancy and capacity.

    Take care & stay well, Andy.

    Regards,
    Andy


    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Thu Feb 15 21:50:01 2024
    Hi,

    On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
    On 2/15/24 11:21, Andy Smith wrote:
    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem labels".

    I'm still confused and it is not all the well clarified by looking at gparted, a shot of which I posted.

    This could all be answered easily if you'd just post the copy-paste
    of your terminal scrollback for what you actually did. Hopefully you
    don't now object to me asking what you meant since apparently even
    you do not know if you mean partition names or filesystem labels.
    From what you posted it now sounds like labels on the ext4
    filesystems that you created.

    What you're trying to do (LVM on MD RAID?) is quite complicated and
    you clearly don't have much experience in this area. That's okay but
    it does mean that you're likely to make a lot of mistakes with a
    thing that holds your data, so you need to be prepared for that.

    For example, you mentioned only as an aside that you intended to get
    two more drives and put the four of them into an LVM, but you did
    not know that this would blow away the filesystems already on the
    drives, and that this would not by itself provide you with any
    redundancy. So if you hadn't said anything and I hadn't questioned
    this, you could well have spent a lot of time creating something
    that isn't correct and needs to be torn down again, possibly with
    data loss.

    Again that's okay — we learn by experimentation — but you're going
    to have to prepare yourself for doing this over again many times.
    And I also want to reiterate that you're going to have questions,
    and that is good, but if we here on this list are not to be driven
    insane by the ambiguities and misunderstandings, please, please,
    PLEASE post logs of the commands you type on this adventure when you
    ask them.

    Please.

    If you have questions, ask them.

    Like which version of a raid is the best at tolerating a failed drive, which give he best balance between redundancy and capacity.

    This is a complex subject. Before we get into it, what are you
    trying to achieve? Like, what is your end goal with these four
    drives?

    MD RAID isn't the only way to achieve redundancy. You also haven't
    explained why you need LVM. Depending on your needs, maybe a
    filesystem with redundancy and volume management features in it
    would be better. Like btrfs or zfs.
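
    (As one concrete sketch of that, with device names assumed: a single
    btrfs filesystem with built-in raid10 across four drives would be
    something like

    mkfs.btrfs -L backups -d raid10 -m raid10 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    and a failed member is later swapped out with "btrfs replace". Whether
    that fits depends on what you are actually trying to do.)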

    Given the problems you had with MD RAID in the past I still maintain
    that you'd likely be better off just getting a storage appliance of
    some kind.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Andy Smith on Thu Feb 15 22:10:01 2024
    On 2/15/24 14:41, Andy Smith wrote:
    Hello,

    On Thu, Feb 15, 2024 at 05:32:34PM +0000, debian-user@howorth.org.uk wrote:
    Andy Smith <andy@strugglers.net> wrote:
    Do remember that this mailing lists does not accept attachments (and
    very few mailing lists in general do), so any time you are tempted
    to send a photo to a mailing list it is probably an error. We did
    not see whatever it was, but it doesn't sound relevant.

    FWIW, the photo that Gene attached was certainly attached to the mail
    that the list sent to me, so I suppose that this list does permit
    attachments, at least in some circumstances.

    Oh yes you're right, I see it too now I've looked properly!

    So now I actually think Gene means a filesystem label?

    Sigh, this really does not need to be this difficult.

    Anyway I see that the image of gparted says there's an ext4
    filesystem there. So, Gene: when you put those partitions into LVM
    (when you make them LVM Physical Volumes) the filesystems on them
    will be trashed, and so will the filesystem labels.

    Which is the answer I needed. Those names I wrote with gparted WILL be
    trashed. Now the question remains: how in hell do I put a label on a drive
    such that it does survive making a raid or lvm device with it? To not
    have a way to id the drive in slot n of a multislot rack stops me in
    my tracks. Particularly with these gigastones: I have 5 of them, but when
    all are plugged in there are only 3, because there are 2 pairs of matching
    serial numbers in the by-id output; by-id sees all 5 drives, but udev
    sees only the unique serial numbers. gparted can change the device's
    blkid, getting a new one from the rng, so while you all think that's the
    greatest thing since bottled beer, I know better.

    Take care, stay well all.

    Thanks,
    Andy


    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Wright@21:1/5 to Andy Smith on Thu Feb 15 22:30:01 2024
    On Thu 15 Feb 2024 at 20:44:52 (+0000), Andy Smith wrote:
    On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
    On 2/15/24 11:21, Andy Smith wrote:
    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem labels".

    I'm still confused and it is not all the well clarified by looking at gparted, a shot of which I posted.

    This could all be answered easily if you'd just post the copy-paste
    of your terminal scrollback for what you actually did. Hopefully you
    don't now object to me asking what you meant since apparently even
    you do not know if you mean partition names or filesystem labels.
    From what you posted it now sounds like labels on the ext4
    filesystems that you created.

    Gene effectively shoots himself in the foot by using gparted (GUI)
    instead of, say, gdisk where it's easy to paste what was done, or
    for someone, say me, to post an example:

    # gdisk /dev/sdz
    GPT fdisk (gdisk) version 1.0.3

    Partition table scan:
    MBR: not present
    BSD: not present
    APM: not present
    GPT: not present

    Creating new GPT entries.

    Command (? for help): o
    This option deletes all partitions and creates a new protective MBR.
    Proceed? (Y/N): y

    Command (? for help): p
    Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
    Model: Desktop
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 3907029134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 3907029101 sectors (1.8 TiB)

    Number Start (sector) End (sector) Size Code Name

    Command (? for help): n
    Partition number (1-128, default 1):
    First sector (34-3907029134, default = 2048) or {+-}size{KMGTP}:
    Last sector (2048-3907029134, default = 3907029134) or {+-}size{KMGTP}:
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300):
    Changed type of partition to 'Linux filesystem'

    Command (? for help): c
    Using 1
    Enter name: Lulu01

    Command (? for help): i
    Using 1
    Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
    Partition unique GUID: 37CF9EDF-C695-428E-9889-2F52C40DFCA5
    First sector: 2048 (at 1024.0 KiB)
    Last sector: 3907029134 (at 1.8 TiB)
    Partition size: 3907027087 sectors (1.8 TiB)
    Attribute flags: 0000000000000000
    Partition name: 'Lulu01'

    Command (? for help): w

    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sdb.
    The operation has completed successfully.
    #

    # gdisk -l /dev/sdz
    GPT fdisk (gdisk) version 1.0.3

    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
    Model: Desktop
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 3907029134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number Start (sector) End (sector) Size Code Name
    1 2048 3907029134 1.8 TiB 8300 Lulu01
    #

    Cheers,
    David.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Thu Feb 15 22:30:01 2024
    Hi,

    On Thu, Feb 15, 2024 at 03:59:30PM -0500, gene heskett wrote:
    Now the question remains howinhell do I put a label on a drive
    such that it does survive making a raid or lvm device with it? To
    not have a way to id its the drive in slot n of a multislot rack
    stops me in my tracks.

    Given that an MD RAID array or a LVM Logical Volume may be spread
    across many different underlying storage devices, the question
    doesn't make sense. Due to the fact that filesystems go on block
    devices, and RAID arrays and LVM LVs can be block devices, a
    filesystem label in that instance would represent possibly multiple
    underlying storage devices. So step back and tell us what you are
    actually trying to achieve, rather than insisting on your X solution
    to your Y problem.

    Suppose you have the MD array /dev/md42. What are you conceptually
    wanting to do with that in relation to labels of some kind? What
    information is it that you want?

    Suppose you have LVM logical volume /dev/myvg/mylv. What are you
    conceptually wanting to do with that in relation to labels of some
    kind? What information is it that you want?
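
    (If the information you actually want is "which physical drives are
    behind this thing", both stacks will tell you directly. A sketch, with
    the example names above:

    mdadm --detail /dev/md42        # lists member devices, flags faulty ones
    lvs -o +devices myvg/mylv       # shows which PVs the LV sits on

    The /dev/sdX names those print can then be mapped to drive serial
    numbers through /dev/disk/by-id as shown before.)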

    Particularly with these gigastones, I 5 of them but when all are plugged in there are only 3 becauae there are 2 pairs of matching serial numbers in the by-id output, by-id sees all 5 drives, but udev see's only the unique
    serial numbers. gparted can change the devices blkid, getting a new one from rng so while you all think that's the greatest thing since bottled beer, I know better.

    Once you explain what information you're trying to get when you
    start with an LVM or MD device, I can probably advise how to get it,
    but just to make clear: I don't think it's a good idea to continue
    to use such broken devices. We don't need to debate that since I
    know you've been posting about that a lot and clearly have decided
    to push ahead. I just think you haven't seen the end of the problems
    with that issue.

    Regards,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Andy Smith on Thu Feb 15 22:50:01 2024
    On 2/15/24 15:45, Andy Smith wrote:
    Hi,

    On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
    On 2/15/24 11:21, Andy Smith wrote:
    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem
    labels".

    I'm still confused and it is not all the well clarified by looking at
    gparted, a shot of which I posted.

    This could all be answered easily if you'd just post the copy-paste
    of your terminal scrollback for what you actually did. Hopefully you
    don't now object to me asking what you meant since apparently even
    you do not know if you mean partition names or filesystem labels.
    From what you posted it now sounds like labels on the ext4
    filesystems that you created.

    What you're trying to do (LVM on MD RAID?) is quite complicated and
    you clearly don't have much experience in this area. That's okay but
    it does mean that you're likely to make a lot of mistakes with a
    thing that holds your data, so you need to be prepared for that.

    For example, you mentioned only as an aside that you intended to get
    two more drives and put the four of them into an LVM, but you did
    not know that this would blow away the filesystems already on the
    drives, and that this would not by itself provide you with any
    redundancy. So if you hadn't said anything and I hadn't questioned
    this, you could well have spent a lot of time creating something
    that isn't correct and needs to be torn down again, possibly with
    data loss.

    That is how we learn, Andy. Any data I put on this stuff while testing,
    as normal files, will be expected to be lost, so that possibility is
    expected. Experience is how I got where I am on an 8th grade education.

    Again that's okay — we learn by experimentation — but you're going
    to have to prepare yourself for doing this over again many times.

    Expected.

    And I also want to reiterate that you're going to have questions,
    and that is good, but if we here on this list are not to be driven
    insane by the ambiguities and misunderstandings, please, please,
    PLEASE post logs of the commands you type on this adventure when you
    ask them.

    I'll try.

    Please.

    If you have questions, ask them.

    When I get it assembled. The last 2 drives should be here tomorrow. Then I
    need to shut down and extract 4 of the gigastones, which are plugged in atm
    but unmounted; the 5th one is now my /home partition. And I am rsync'ing
    /home back to that now-idle raid10 about every other day.

    Like which version of a raid is the best at tolerating a failed drive, which give he best balance between redundancy and capacity.

    This is a complex subject. Before we get into it, what are you
    trying to achieve? Like, what is your end goal with these four
    drives?

    MD RAID isn't the only way to achieve redundancy. You also haven't
    explained why you need LVM. Depending on your needs, maybe a
    filesystem with redundancy and volume management features in it
    would be better. Like btrfs or zfs.

    Given the problems you had with MD RAID in the past I still maintain
    that you'd likely be better off just getting a storage appliance of
    some kind.

    Thanks,
    Andy

    Thank you Andy.

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Thu Feb 15 23:40:01 2024
    Now the question remains howinhell do I put a label on a drive such
    that it does survive making a raid or lvm device with it?

    LVM/MD take control of a block device (usually a partition), so any info
    in that block device can't be used for your purpose. IOW you have to
    put the info somewhere on the disk *outside* of the partition used by
    LVM/MD.

    I can see a few different options:

    - Use some disk-specific tool to change the disk's serial numbers.
    I'm not sure how common such tools are, they're probably
    manufacturer-specific and proprietary; my intuition tells me to try
    any other way first.

    - Use partition labels and/or partition UUIDs: contrary to filesystem
    labels, these are not stored inside the block device but inside the
    partition table. They don't exist in the old MBR-style partitions,
    but they do in GPT (GUID Partition Tables).

    - Use an additional tiny dummy partition in which you can put any info
    you like.
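
    As a sketch of the second option, with device and name assumed: the GPT
    partition name and UUID stay readable even after the partition becomes
    an LVM PV or an MD member, e.g.

    sgdisk --change-name=1:SiPwr_0 /dev/sdc      # set the GPT partition name
    lsblk -o NAME,PARTLABEL,PARTUUID /dev/sdc    # still visible once in LVM/MD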


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Andy Smith on Fri Feb 16 02:50:01 2024
    On 2/15/24 15:45, Andy Smith wrote:
    Hi,

    On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
    On 2/15/24 11:21, Andy Smith wrote:
    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem
    labels".

    I'm still confused and it is not all the well clarified by looking at
    gparted, a shot of which I posted.

    This could all be answered easily if you'd just post the copy-paste
    of your terminal scrollback for what you actually did. Hopefully you
    don't now object to me asking what you meant since apparently even
    you do not know if you mean partition names or filesystem labels.
    From what you posted it now sounds like labels on the ext4
    filesystems that you created.

    What you're trying to do (LVM on MD RAID?) is quite complicated and
    you clearly don't have much experience in this area. That's okay but
    it does mean that you're likely to make a lot of mistakes with a
    thing that holds your data, so you need to be prepared for that.

    For example, you mentioned only as an aside that you intended to get
    two more drives and put the four of them into an LVM, but you did
    not know that this would blow away the filesystems already on the
    drives, and that this would not by itself provide you with any
    redundancy. So if you hadn't said anything and I hadn't questioned
    this, you could well have spent a lot of time creating something
    that isn't correct and needs to be torn down again, possibly with
    data loss.

    Again that's okay — we learn by experimentation — but you're going
    to have to prepare yourself for doing this over again many times.
    And I also want to reiterate that you're going to have questions,
    and that is good, but if we here on this list are not to be driven
    insane by the ambiguities and misunderstandings, please, please,
    PLEASE post logs of the commands you type on this adventure when you
    ask them.

    Please.

    If you have questions, ask them.

    Like which version of a raid is the best at tolerating a failed drive, which give he best balance between redundancy and capacity.

    This is a complex subject. Before we get into it, what are you
    trying to achieve? Like, what is your end goal with these four
    drives?

    MD RAID isn't the only way to achieve redundancy. You also haven't
    explained why you need LVM. Depending on your needs, maybe a
    filesystem with redundancy and volume management features in it
    would be better. Like btrfs or zfs.
    Maybe I misunderstood the wiki: xfs is stated as not being complete for
    linux, and zfs is, I think, commercial?
    Can you update that?


    Given the problems you had with MD RAID in the past I still maintain
    that you'd likely be better off just getting a storage appliance of
    some kind.
    One of the 1T samsungs in the md raid10 isn't entirely happy, but mdadm
    has not fussed about it, and smartctl seems to say it's ok after testing.
    Other than that, the gui access delay (30+ seconds) problems I have did
    NOT go away when I moved /home off the raid to another SSD, so I may
    move it back. That is one of the reasons I am rsync'ing this /home back
    to it every other day or so; it takes < 5 minutes.

    Thanks,
    Andy


    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Andy Smith on Fri Feb 16 07:20:01 2024
    On 2/15/24 16:20, Andy Smith wrote:
    Hi,

    On Thu, Feb 15, 2024 at 03:59:30PM -0500, gene heskett wrote:
    Now the question remains howinhell do I put a label on a drive
    such that it does survive making a raid or lvm device with it? To
    not have a way to id its the drive in slot n of a multislot rack
    stops me in my tracks.

    Given that an MD RAID array or a LVM Logical Volume may be spread
    across many different underlying storage devices, the question
    doesn't make sense. Due to the fact that filesystems go on block
    devices, and RAID arrays and LVM LVs can be block devices, a
    filesystem label in that instance would represent possibly multiple underlying storage devices. So step back and tell us what are you
    actually trying to achieve, rather than insisting on your X solution
    to your Y problem.

    Suppose you have the MD array /dev/md42. What are you conceptually
    wanting to do with that in relation to labels of some kind? What
    information is it that you want?

    Support you have LVM logical volume /dev/myvg/mylv. What are you
    conceptually wanting to do with that in relation to labels of some
    kind? What information is it that you want?

    I want to know with absolute certainty which of the 4 drives in that
    raid10 actually has a belly ache, when it has a belly ache. I can't see
    any reason on this ball of rock and water why I should be expected to
    replace a drive at a time until the belly ache goes away.

    Particularly with these gigastones, I 5 of them but when all are plugged in there are only 3 becauae there are 2 pairs of matching serial numbers in the by-id output, by-id sees all 5 drives, but udev see's only the unique
    serial numbers. gparted can change the devices blkid, getting a new one from rng so while you all think that's the greatest thing since bottled beer, I know better.

    Once you explain what information you're trying to get when you
    start with an LVM or MD device, I can probably advise how to get it,
    but just to make clear: I don't think it's a good idea to continue
    to use such broken devices. We don't need to debate that since I
    know you've been posting about that a lot and clearly have decided
    to push ahead. I just think you haven't seen the end of the problems
    with that issue.

    Regards,
    Andy


    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to David Wright on Fri Feb 16 07:40:01 2024
    On 2/15/24 16:20, David Wright wrote:
    On Thu 15 Feb 2024 at 20:44:52 (+0000), Andy Smith wrote:
    On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
    On 2/15/24 11:21, Andy Smith wrote:
    You asked if "labels" would survive their associated partition being
    put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem
    labels".

    I'm still confused, and it is not at all well clarified by looking at
    gparted, a shot of which I posted.

    This could all be answered easily if you'd just post the copy-paste
    of your terminal scrollback for what you actually did. Hopefully you
    don't now object to me asking what you meant since apparently even
    you do not know if you mean partition names or filesystem labels.
    From what you posted it now sounds like labels on the ext4
    filesystems that you created.

    Gene effectively shoots himself in the foot by using gparted (GUI)
    instead of, say, gdisk where it's easy to paste what was done, or
    for someone, say me, to post an example:

    # gdisk /dev/sdz
    GPT fdisk (gdisk) version 1.0.3

    Partition table scan:
    MBR: not present
    BSD: not present
    APM: not present
    GPT: not present

    Creating new GPT entries.

    Command (? for help): o
    This option deletes all partitions and creates a new protective MBR.
    Proceed? (Y/N): y

    Command (? for help): p
    Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
    Model: Desktop
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 3907029134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 3907029101 sectors (1.8 TiB)

    Number Start (sector) End (sector) Size Code Name

    Command (? for help): n
    Partition number (1-128, default 1):
    First sector (34-3907029134, default = 2048) or {+-}size{KMGTP}:
    Last sector (2048-3907029134, default = 3907029134) or {+-}size{KMGTP}:
    Current type is 'Linux filesystem'
    Hex code or GUID (L to show codes, Enter = 8300):
    Changed type of partition to 'Linux filesystem'

    Command (? for help): c
    Using 1
    Enter name: Lulu01

    Command (? for help): i
    Using 1
    Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
    Partition unique GUID: 37CF9EDF-C695-428E-9889-2F52C40DFCA5
    First sector: 2048 (at 1024.0 KiB)
    Last sector: 3907029134 (at 1.8 TiB)
    Partition size: 3907027087 sectors (1.8 TiB)
    Attribute flags: 0000000000000000
    Partition name: 'Lulu01'

    Command (? for help): w

    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sdb.
    The operation has completed successfully.
    #

    # gdisk -l /dev/sdz
    GPT fdisk (gdisk) version 1.0.3

    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
    Model: Desktop
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 3907029134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number Start (sector) End (sector) Size Code Name
    1 2048 3907029134 1.8 TiB 8300 Lulu01
    #

    Cheers,
    David.

    .
    And this "partition" name survives?, and can be unique?, and can be used
    in a mount cmd? That's how I'll do it then. This if all 3 questions
    above can be answered with a yes is the answer I've been trying to
    squeeze out all along. Thank you.

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anssi Saari@21:1/5 to Stefan Monnier on Fri Feb 16 08:50:01 2024
    Stefan Monnier <monnier@iro.umontreal.ca> writes:

    - Use an additional tiny dummy partition in which you can put any info
    you like.

    This seems to be what Microsoft likes to do. At least I had the pleasure
    of tossing a "Microsoft reserved" partition out from my desktop
    recently; I think the Windows 10 installer created it but didn't use
    it. It was just 16 MB of zeros in a very inconvenient location.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From debian-user@howorth.org.uk@21:1/5 to gene heskett on Fri Feb 16 13:50:01 2024
    gene heskett <gheskett@shentel.net> wrote:
    On 2/15/24 15:45, Andy Smith wrote:

    MD RAID isn't the only way to achieve redundancy. You also haven't explained why you need LVM. Depending on your needs, maybe a
    filesystem with redundancy and volume management features in it
    would be better. Like btrfs or zfs.
    Maybe I misunderstood the wiki; xfs is stated as not being complete
    for linux, and zfs is I think commercial?
    Can you update that?

    Sorry, which wiki page do you think says XFS is not complete?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Fri Feb 16 15:40:01 2024
    Hello,

    On Fri, Feb 16, 2024 at 01:16:59AM -0500, gene heskett wrote:
    On 2/15/24 16:20, Andy Smith wrote:
    Suppose you have the MD array /dev/md42. What are you conceptually
    wanting to do with that in relation to labels of some kind? What information is it that you want?

    Suppose you have LVM logical volume /dev/myvg/mylv. What are you conceptually wanting to do with that in relation to labels of some
    kind? What information is it that you want?

    I want to know with absolute certainty which of the 4 drives in that raid10 actually has a belly ache, when it has a belly ache.

    So this is an example of you moving the goal posts. You started off
    by saying you needed to identify something just from the array
    device name, but now you say you need to identify which drive in the
    array has a problem (exact problem not specified).

    The /proc/mdstat file shows all the devices that are in all the MD
    arrays. Any time the kernel has problems with a device it logs the
    name of the actual device (not the array etc.) in the system log. If
    the problems are bad enough then the MD driver notices and removes
    the device from the array.

    This is normal-looking content of /proc/mdstat:

    $ cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md1 : active raid1 sda3[1] nvme0n1p3[0]
    243316736 blocks super 1.2 [2/2] [UU]
    bitmap: 1/2 pages [4KB], 65536KB chunk

    Where it says [UU] it would say [_U] or [U_] if one of those devices
    had been removed, and in the list of devices the one that's failed
    would have an (F) after it.

    But I'm fairly sure that in all your posts about your RAID-10 people
    have been through this with you multiple times, so this must not
    actually be the information that you are after.

    Furthermore I do not understand how your idea of labelling drives
    (or partitions or filesystems) would ever give you this information
    even if it had worked.

    If you mean that you have system logs that say for example that
    sda1 has problems, and you want to find out what sda1 actually is,
    well I already showed you one way: by looking in /dev/disk/by-id/.
    There's also "smartctl -i /dev/sda", and others have posted other
    ways.
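
    For example (illustrative output only; your model and serial will
    differ):

    $ ls -l /dev/disk/by-id/ | grep -w sda
    lrwxrwxrwx 1 root root 9 Feb 16 12:00 ata-Samsung_SSD_870_EVO_1TB_S5Y1NF0Rxxxxxx -> ../../sda
    # smartctl -i /dev/sda | grep -E 'Model|Serial'
    Device Model:     Samsung SSD 870 EVO 1TB
    Serial Number:    S5Y1NF0Rxxxxxx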

    If you don't mean that, then tell us what actual information you are
    starting from, and what you hope to get from there. "My array has
    problems, how do I find the problem drive within it" is too vague
    because we don't know what "my array has problems" actually means.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Fri Feb 16 15:50:01 2024
    Hi,

    On Fri, Feb 16, 2024 at 01:32:26AM -0500, gene heskett wrote:
    On 2/15/24 16:20, David Wright wrote:
    # gdisk -l /dev/sdz
    GPT fdisk (gdisk) version 1.0.3

    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
    Model: Desktop
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 3907029134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number Start (sector) End (sector) Size Code Name
    1 2048 3907029134 1.8 TiB 8300 Lulu01
    #
    .
    And this "partition" name survives?

    No, because it's a filesystem label for the ext4 fs created on
    /dev/sdz1. If sdz1 is turned into an LVM Physical Volume, there
    won't be an ext4 filesystem on it any more. If sdz1 is turned into a
    member of an MD array, there won't be an ext4 filesystem on it any
    more. The labels go with the filesystem.

    and can be unique?

    I don't know what that means to you or why it is useful.

    and can be used in a mount cmd?

    Once the RAID and/or LVM is set up and a filesystem put on it, that
    filesystem can be mounted by label just like any filesystem can, but
    that filesystem may have multiple devices underneath it owing to the
    fact that it's on RAID and/or LVM, so there is no information you
    can put in its label that will tell you anything about those
    underlying devices.
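
    One way to see all of those layers and their names in one go is lsblk
    (a sketch; the device, VG/LV and label names here are only placeholders):

    $ lsblk -o NAME,TYPE,FSTYPE,LABEL,PARTLABEL /dev/sdc
    NAME          TYPE FSTYPE      LABEL PARTLABEL
    sdc           disk
    └─sdc1        part LVM2_member       SiPwr_0
      └─myvg-mylv lvm  ext4        bigfs

    The PARTLABEL lives in the partition table, the LABEL lives in the
    filesystem on top of the LV, and neither tells you anything about the
    other.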

    If all 3 questions above can be answered with a yes, that is the answer
    I've been trying to squeeze out all along.

    You've not yet been clear about what you want, but from what little
    information you have provided you've been told multiple times by
    multiple people that filesystem labels won't help.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Fri Feb 16 16:00:01 2024
    Hi,

    On Thu, Feb 15, 2024 at 08:44:26PM -0500, gene heskett wrote:
    On 2/15/24 15:45, Andy Smith wrote:
    MD RAID isn't the only way to achieve redundancy. You also haven't explained why you need LVM. Depending on your needs, maybe a
    filesystem with redundancy and volume management features in it
    would be better. Like btrfs or zfs.
    Maybe I misunderstood the wiki; xfs is stated as not being complete for linux, and zfs is I think commercial?
    Can you update that?

    I'd rather not try to explain XFS and ZFS to you when it's not even
    clear what you're trying to achieve. In all likelihood you will not
    need to use either XFS or ZFS.

    Also we can't correct a wiki article without knowing what it is…

    the gui access delay (30+ seconds) problems I have did NOT go away
    when I moved /home off the raid to another SSD

    More evidence that those problems had nothing to do with RAID or the
    storage devices you used in your RAID, but is something broken in
    your desktop software setup. Unfortunately I have no idea how to
    debug that.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to debian-user@howorth.org.uk on Fri Feb 16 16:40:02 2024
    On 2/16/24 07:46, debian-user@howorth.org.uk wrote:
    gene heskett <gheskett@shentel.net> wrote:
    On 2/15/24 15:45, Andy Smith wrote:

    MD RAID isn't the only way to achieve redundancy. You also haven't
    explained why you need LVM. Depending on your needs, maybe a
    filesystem with redundancy and volume management features in it
    would be better. Like btrfs or zfs.
    Maybe I misunderstood the wiki; xfs is stated as not being complete
    for linux, and zfs is I think commercial?
    Can you update that?

    Sorry, which wiki page do you think says XFS is not complete?

    .
    I wasn't awake enough to bookmark it. I'm not done with the wiki yet;
    if I run across it again I'll post the link.

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to gene heskett on Fri Feb 16 21:00:01 2024
    On 2/15/24 12:19, gene heskett wrote:
    On 2/15/24 11:21, Andy Smith wrote:
    ... redundancy plans ...

    Like which version of a raid is the best at tolerating a failed drive,
    and which gives the best balance between redundancy and capacity.


    Given a small number of disks, N (say, 4 to 8), the obvious choices are
    RAID5, RAID6, and RAID10.


    Regarding redundancy:

    * RAID5 can tolerate the loss of any one disk.

    * RAID6 can tolerate the loss of any two disks.

    * RAID10 can tolerate the loss of any one disk. If you get lucky,
    RAID10 can tolerate the loss of multiple disks if each lost disk is in a different mirror.


    Regarding capacity, if each disk stores B bytes:

    * RAID5 gives you (N-1) * B capacity.

    * RAID6 gives you (N-2) * B capacity.

    * RAID10 gives you (N/2) * B capacity.


    If each disk has performance P:

    * RAID5 has performance ranging from P to (N-1) * P.

    * RAID6 has performance ranging from P to (N-2) * P.

    * RAID10 with M mirrors of D disks each has write performance M * P and
    read performance M * D * P.
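
    Plugging numbers into those formulas, with N = 4 disks of B = 4 TB each:
    RAID5 gives 12 TB, RAID6 gives 8 TB, and RAID10 (2 mirrors of 2 disks)
    gives 8 TB; best-case performance works out to roughly 3 * P for RAID5,
    2 * P for RAID6, and (for RAID10) about 2 * P write and 4 * P read.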


    Other factors to consider:

    * All of the above needs to be reconsidered when one or more disks fail
    -- e.g. the array is operating in degraded mode.

    * All of the above needs to be reconsidered when a failed disk has been replaced -- e.g. the array is resilvering.

    * All of the above needs to be reconsidered when disk(s) fail during resilvering (!).

    * RAID5 and RAID6 typically do not allow changes to topology -- e.g. the
    number of disks in the array and the number of bytes used in each disk.

    * RAID0, RAID1, and JBOD may allow some changes to topology. What is
    allowed depends upon implementation.

    * With more disks, you may be able to create hierarchies -- e.g. stripe
    of mirrors (RAID10). Redundancy, capacity, and/or performance under operational, degraded, resilvering, etc., modes all need to be reconsidered.

    * Hot spares can be added. Again, reconsider everything.

    * And more.


    So, it's a multi-dimensional problem and there are many combinations and permutations. The more disks you have, the more possibilities you have.
    I suggest picking two or three, and exploring them using a dedicated computer, a snapshot of your data, and your workload.


    I am currently using ZFS and a stripe of 2 mirrors with 2 @ 3 TB HDD's
    each and SSD read cache. I expect the same could be implemented with
    mdadm(8), lvm(8), bcache, dm-cache, btrfs, and others.
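
    For instance, an mdadm equivalent of that stripe of 2 mirrors would be
    something like the following sketch (device names and the label are
    placeholders, the SSD cache layer is left out, and --create destroys
    whatever is on those partitions):

    # mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # mkfs.ext4 -L bigpool /dev/md0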


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to gene heskett on Fri Feb 16 21:10:01 2024
    On 2/15/24 12:59, gene heskett wrote:
    ... gigastones, I have 5 of them but when all
    are plugged in there are only 3 because there are 2 pairs of matching
    serial numbers ...


    I recall 2 pairs of SSD's with matching serial numbers. Please remove
    one SSD of each pair so that the remaining SSD's all have unique serial numbers. Return them for a refund while you still can. If you cannot,
    put them in another computer or put them on the shelf as spares.


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Wright@21:1/5 to David Christensen on Fri Feb 16 21:10:01 2024
    On Fri 16 Feb 2024 at 11:59:40 (-0800), David Christensen wrote:
    On 2/15/24 12:59, gene heskett wrote:
    ... gigastones, I have 5 of them but when all
    are plugged in there are only 3 because there are 2 pairs of
    matching serial numbers ...

    I recall 2 pairs of SSD's with matching serial numbers. Please remove
    one SSD of each pair so that the remaining SSD's all have unique
    serial numbers. Return them for a refund while you still can. If you cannot, put them in another computer or put them on the shelf as
    spares.

    Surely split them between at least two computers, so that
    neither contains a duplicate?

    Cheers,
    David.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Wright@21:1/5 to gene heskett on Fri Feb 16 21:10:01 2024
    On Fri 16 Feb 2024 at 01:32:26 (-0500), gene heskett wrote:
    On 2/15/24 16:20, David Wright wrote:
    On Thu 15 Feb 2024 at 20:44:52 (+0000), Andy Smith wrote:
    On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
    On 2/15/24 11:21, Andy Smith wrote:
    You asked if "labels" would survive their associated partition being put into LVM.

    I said, "yes if you mean partition names, no if you mean filesystem labels".

    I'm still confused, and it is not at all well clarified by looking at gparted, a shot of which I posted.

    This could all be answered easily if you'd just post the copy-paste
    of your terminal scrollback for what you actually did. Hopefully you don't now object to me asking what you meant since apparently even
    you do not know if you mean partition names or filesystem labels.
    From what you posted it now sounds like labels on the ext4
    filesystems that you created.

    Gene effectively shoots himself in the foot by using gparted (GUI)
    instead of, say, gdisk where it's easy to paste what was done, or
    for someone, say me, to post an example:

    [ … skipped over creating the partition table … ]

    # gdisk -l /dev/sdz
    GPT fdisk (gdisk) version 1.0.3

    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
    Model: Desktop
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 3907029134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number Start (sector) End (sector) Size Code Name
    1 2048 3907029134 1.8 TiB 8300 Lulu01
    #

    And this "partition" name survives?, and can be unique?, and can be
    used in a mount cmd? That's how I'll do it then. This if all 3
    questions above can be answered with a yes is the answer I've been
    trying to squeeze out all along. Thank you.

    Yes, the partition name (PARTLABEL) is in the partition table, not
    inside the partition itself. It's as unique as you make it, because
    you choose it. I've scrawled the names of my disks on the casing with
    a magic marker for 25 years, from adam (6.4GB fujitsu) to wick (2TB WD).
    The PARTLABELs and LABELs use that name as the stem, capitalised and
    lowercase respectively.

    As for using it with the mount command, that depends on what the
    partition contains. For a straightforward filesystem, you can, as
    described by man mount (under Indicating the device and filesystem).

    But I wouldn't, and I don't think you want to, as I believe you want
    to use the partition as /part/ of something larger.

    Whether you /can/ use it to mount depends on what the partition
    contains. I don't use LVM or RAID, so I can't advise you there, except
    to say that you wouldn't want to mount one piece of a larger structure,
    AFAIK. But in my case, I use LUKS encryption, and I can demonstrate
    what happens:

    $ sudo udisksctl unlock --block-device /dev/disk/by-partlabel/Lulu01
    Passphrase:
    Unlocked /dev/sdc1 as /dev/dm-2.
    $

    # mount /dev/disk/by-partlabel/Lulu01 /media/lulu01
    mount: /media/lulu01: unknown filesystem type 'crypto_LUKS'.
    #

    You don't want to mount the partition, but the filesystem /within/ the partition:

    # mount LABEL=lulu01 /media/lulu01
    #

    Of course, I don't normally use mount as root because I have an entry
    in /etc/fstab:

    LABEL=lulu01 /media/lulu01 ext4 rw,errors=remount-ro,user,noauto

    and I use a bash function called, surprisingly, lulu, as there's only
    one partition on the disk:

    $ type lulu
    lulu is a function
    lulu ()
    {
    sudo udisksctl unlock --block-device /dev/disk/by-partlabel/Lulu01 && mount /media/lulu01
    }
    $

    thus:

    $ lulu
    Passphrase:
    Unlocked /dev/sdc1 as /dev/dm-2.
    $

    But I would emphasise that, having unlocked the partition, I mount
    the filesystem because it stands alone. It's not part of a RAID,
    LVM, or whatever, that might need assembling with other components
    before mounting the whole ensemble.

    Cheers,
    David.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to gene heskett on Fri Feb 16 21:20:01 2024
    On 2/15/24 17:44, gene heskett wrote:
    One of the 1T samsungs in the md raid10 isn't entirely happy but mdadm
    has not fussed about it, and smartctl seems to say it's ok after testing.
    Other than that, the gui access delay (30+ seconds) problems I have did
    NOT go away when I moved /home off the raid to another SSD, so I may
    move it back. One of the reasons I am rsync'ing this /home back to it
    every other day or so; takes < 5 minutes.


    Please get a small SSD, do a fresh install, and test for the access
    delay. If the delay is not present, incrementally add and test
    applications. If you encounter the delay, please stop and post the
    details; console sessions are best. If not, then connect the disks with
    /home and test. If you encounter the delay, then please stop and post
    the details. If you do not encounter the delay, then your system is
    fixed. Take a Clonezilla image.


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Fri Feb 16 21:50:01 2024
    One of the 1T samsungs in the md raid10 isn't entirely happy but mdadm
    has not fussed about it, and smartctl seems to say it's ok after testing.
    Other than that, the gui access delay (30+ seconds) problems I have did
    NOT go away when I moved /home off the raid to another SSD, so I may move
    it back. One of the reasons I am rsync'ing this /home back to it every
    other day or so; takes < 5 minutes.
    Please get a small SSD, do a fresh install, and test for the access delay.
    If the delay is not present, incrementally add and test applications.
    If you encounter the delay, please stop and post the details; console sessions are best. If not, then connect the disks with /home and test.
    If you encounter the delay, then please stop and post the details. If you
    do not encounter the delay, then your system is fixed.
    Take a Clonezilla image.

    FWIW, my crystal ball says "30s => software timeout rather than hardware problem"


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to gene heskett on Fri Feb 16 21:30:01 2024
    On 2/15/24 22:16, gene heskett wrote:
    I want to know with absolute certainty which of the 4 drives in that
    raid10 actually has a belly ache, when it has a belly ache. I can't see
    any reason on this ball of rock and water why I should be expected to
    replace a drive at a time until the belly ache goes away.


    I seem to recall the Samsung 1 TB SSD's in your /home RAID10 were worn
    out. I suggest installing the 2 TB M.2 WD Black, partitioning it with
    GPT, creating one large partition, mounting it at /data, and copying all
    of the data from /home to /data before the SSD's and RAID fail completely.
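
    A minimal sketch of that, assuming the WD Black shows up as
    /dev/nvme1n1 (check with lsblk first; the device name, partition name,
    and label here are placeholders, and sgdisk --zap-all wipes the disk):

    # sgdisk --zap-all /dev/nvme1n1
    # sgdisk -n 1:0:0 -t 1:8300 -c 1:data01 /dev/nvme1n1
    # mkfs.ext4 -L data01 /dev/nvme1n1p1
    # mkdir -p /data
    # mount LABEL=data01 /data
    # rsync -aHAX /home/ /data/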


    I recently had an Intel SSD 520 Series 180 GB go from operational to
    toast, with nothing in between. If that happens to one of those Samsung
    1 TB SSD's, there will be no way for the RAID10 to correct the bad
    blocks on the other SSD. You will corrupt and lose data.


    I leave /home on my root partition. My working directories are in CVS.
    The only ephemeral data is in $HOME/.thunderbird. I have a mail filter
    that copies incoming mail to a second folder on the IMAP server. I Bcc outgoing mail to another mail account. If my OS disk dies, I restore
    the image from last month, update Debian, check out my work, reconnect Thunderbird to the various e-mail servers, and clean up the Thunderbird
    folders as required. No data is lost.


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Christensen@21:1/5 to Stefan Monnier on Fri Feb 16 22:10:01 2024
    On 2/16/24 12:46, Stefan Monnier wrote:
    One of the 1T samsungs in the md raid10 isn't entirely happy but mdadm
    has not fussed about it, and smartctl seems to say it's ok after testing.
    Other than that, the gui access delay (30+ seconds) problems I have did
    NOT go away when I moved /home off the raid to another SSD, so I may move
    it back. One of the reasons I am rsync'ing this /home back to it every
    other day or so; takes < 5 minutes.
    Please get a small SSD, do a fresh install, and test for the access delay.
    If the delay is not present, incrementally add and test applications.
    If you encounter the delay, please stop and post the details; console
    sessions are best. If not, then connect the disks with /home and test.
    If you encounter the delay, then please stop and post the details. If you
    do not encounter the delay, then your system is fixed.
    Take a Clonezilla image.

    FWIW, my crystal ball says "30s => software timeout rather than hardware problem"


    +1


    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Stefan Monnier on Fri Feb 16 22:20:01 2024
    On 2/16/24 15:47, Stefan Monnier wrote:
    One of the 1T samsungs in the md raid10 isn't entirely happy but mdadm
    has not fussed about it, and smartctl seems to say it's ok after testing.
    Other than that, the gui access delay (30+ seconds) problems I have did
    NOT go away when I moved /home off the raid to another SSD, so I may move
    it back. One of the reasons I am rsync'ing this /home back to it every
    other day or so; takes < 5 minutes.
    Please get a small SSD, do a fresh install, and test for the access delay.
    If the delay is not present, incrementally add and test applications.
    If you encounter the delay, please stop and post the details; console
    sessions are best. If not, then connect the disks with /home and test.
    If you encounter the delay, then please stop and post the details. If you
    do not encounter the delay, then your system is fixed.
    Take a Clonezilla image.

    FWIW, my crystal ball says "30s => software timeout rather than hardware problem"


    Stefan

    We are on the same page, but what is causing the timeout?
    .

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to Stefan Monnier on Sat Feb 17 03:30:02 2024
    Hello,

    On Fri, Feb 16, 2024 at 03:46:54PM -0500, Stefan Monnier wrote:
    FWIW, my crystal ball says "30s => software timeout rather than hardware problem"

    Back in a previous thread Gene was saying that it's only evident
    when some GUI app brings up a file requester to load or save
    something so that was my thought too. In particular that it might be
    doing some kind of failed network activity looking for network
    shares or something.

    The thing is, we've also seen Gene's computers with strange things
    like syntax errors in /etc/nsswitch.conf and /etc/hosts, avahi bits
    manually rm'd, resolv.conf whacked with chattr +i and so on, so
    it's also no surprise to me that this is difficult to debug.

    David's suggestion of starting with a minimal install might be the
    only way to do it.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to David Wright on Sat Feb 17 03:20:01 2024
    Hello,

    On Fri, Feb 16, 2024 at 02:02:59PM -0600, David Wright wrote:
    On Fri 16 Feb 2024 at 14:48:12 (+0000), Andy Smith wrote:
    No, because it's a filesystem label for the ext4 fs created on
    /dev/sdz1. If sdz1 is turned into an LVM Physical Volume, there
    won't be an ext4 filesystem on it any more. If sdz1 is turned into a
    member of an MD array, there won't be an ext4 filesystem on it any
    more. The labels go with the filesystem.

    It isn't a filesystem LABEL.

    Oh dear, I am lost. I don't use gparted but at least one person in
    this thread has said that Gene created a filesystem label not a
    partition name, and Gene doesn't know which he created, so I've gone
    from guessing partition name to fs label and now back to partition
    name again.

    I'm totally willing to believe that you know what you've created
    there though, so fair enough.

    You've not yet been clear about what you want, but from what little information you have provided you've been told multiple times by
    multiple people that filesystem labels won't help.
    ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑

    … which would be moot if only Gene could create partition PARTLABELs successfully.

    Sure, but we still don't know what Gene is trying to do or why
    partition names would be useful to him so I am kind of sceptical
    that this leads anywhere.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From tomas@tuxteam.de@21:1/5 to David Christensen on Sat Feb 17 06:40:01 2024
    On Fri, Feb 16, 2024 at 12:12:06PM -0800, David Christensen wrote:
    On 2/15/24 17:44, gene heskett wrote:

    [...]

     Other than that the gui access delay (30+ seconds) problems I have did NOT go away when I moved /home off the raid to another SSD [...]

    I think at this point few are surprised by that. Last round of debugging
    we pretty much eliminated disk access as the likely cause of those delays.

    The most hopeful candidate for a cause, IIRC, was some thingy deep in the DE trying to access an unavailable resource.

    Cheers
    --
    t


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From tomas@tuxteam.de@21:1/5 to Stefan Monnier on Sat Feb 17 06:40:01 2024
    On Fri, Feb 16, 2024 at 03:46:54PM -0500, Stefan Monnier wrote:

    [...]

    FWIW, my crystal ball says "30s => software timeout rather than hardware problem"

    and within that, a network thingy. Ah, were it 90s, it'd be a DNS thingy.
    But 30s...

    Cheers
    --
    t


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to Andy Smith on Sat Feb 17 06:50:01 2024
    On 2/16/24 21:13, Andy Smith wrote:
    Hello,

    On Fri, Feb 16, 2024 at 02:02:59PM -0600, David Wright wrote:
    On Fri 16 Feb 2024 at 14:48:12 (+0000), Andy Smith wrote:
    No, because it's a filesystem label for the ext4 fs created on
    /dev/sdz1. If sdz1 is turned into an LVM Physical Volume, there
    won't be an ext4 filesystem on it any more. If sdz1 is turned into a
    member of an MD array, there won't be an ext4 filesystem on it any
    more. The labels go with the filesystem.

    It isn't a filesystem LABEL.

    Oh dear, I am lost. I don't use gparted but at least one person in
    this thread has said that Gene created a filesystem label not a
    partition name, and Gene doesn't know which he created, so I've gone
    from guessing partition name to fs label and now back to partition
    name again.

    I'm totally willing to believe that you know what you've created
    there though, so fair enough.

    You've not yet been clear about what you want, but from what little
    information you have provided you've been told multiple times by
    multiple people that filesystem labels won't help.
    ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
    … which would be moot if only Gene could create partition PARTLABELs
    successfully.

    Sure, but we still don't know what Gene is trying to do or why
    partition names would be useful to him so I am kind of sceptical
    that this leads anywhere.

    That part happens if the ^%$ drives ever get here; I just looked at the
    front deck and it has 2" of fresh white stuff on it.

    To describe what I am building, this is a 5 slot bare drive cage. You
    could throw tom cats thru it from most angles so I printed pretty sides
    for it.

    I've printed drawers to fill those slots. The top slot has a bpi-m5 in
    it, the bottom slot has a 5 volt 10 amp psu in it. Slot 2 will have 2 of
    those nearly 4T SSD's in a 2 drive adapter, with full disk partitions on
    them, so obviously I should name the top one "si-pwr-s2t" and the bottom
    one s/b si-pwr-s2b.
    Slot-3 then s/b si-pwr-s3t and si-pwr-s3b.
    Slot-4 then is giga-s4t1 and giga-s4t2, ditto for the bottom one, named
    giga-s4b1 and giga-s4b2: 1 partition to hold amanda's database and one
    to serve as amanda's holding disk.

    What's so meaningless to you that you can't see the utility in that?
    That has not been explained, so please educate me as to why you think
    it's worthless?
    Thanks,
    Andy


    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Smith@21:1/5 to gene heskett on Sat Feb 17 18:50:01 2024
    Hi,

    On Sat, Feb 17, 2024 at 12:46:25AM -0500, gene heskett wrote:

    [38 lines of irrelevance snipped out of a 71 line email]

    I've printed drawers to fill those slots. The top slot has a bpi-m5 in it, the bottom slot has a 5 volt 10 amp psu in it. slot 2 will have 2 of those nearly 4T SSD's in a 2 drive adapter, with full disk partitions on them, so obviously I should name the top one as "si-pwr-s2t". the bottom one then s/b si-pwr-s2b
    slot-3 then s/b si-pwr-s3t and si-pwr-s3b.
    slot-4 then is giga-s4t1 and giga-s4t2. ditto for the bottom one. named giga-s4b1 and giga-s4b2. 1 partition to hold amanda's database and one to serve as amanda's holding disk.

    What's so meaningless to you that you can't see the utility in that?

    I've got no issue with putting a drive identifier on the physical
    caddy/drawer that holds that drive. I do it myself. You have not
    ever before in this thread mentioned this, so neither I nor anyone
    else has objected to it.

    What I question the value of, is putting a drive identifier into a
    partlabel when the id of the partition will contain all of the same information.

    I have also asked you several times what it is you intend to do
    with that information in the context of a RAID array or LVM LV and
    you haven't yet been able to tell me. The closest you have come so
    far is saying, "I want to identify a drive when the array has
    problems". As you don't specify what those problems might be, all I
    am able to say to that is that you can either find the problem
    device from your logs or by listing the devices in the array/LV, and
    from there map to exact model and serial number from what's in the /dev/disk/by-id/.

    Now, I understand that you have multiple drives that have the same
    model and serial number. I accept that if you're going to use
    multiple of these in the same machine then that makes using by-id/
    impossible. I've advised that I would never use multiple of these in
    the same machine because they are broken and will likely cause other
    problems further down the line.

    So if you want to say: despite the duplicate serial number issue I
    am determined to use multiple of these drives, so by-id/ is useless
    to me and I will instead replicate that info in partlabels and use /dev/disk/by-partlabel/, then okay! I don't agree with that course
    of action, but it is at least a cogent argument. So say if that's
    the case and we can just move on.
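
    In that case the mechanics are simple enough (a sketch; the device and
    the name are placeholders):

    # sgdisk -c 1:si-pwr-s2t /dev/sdc
    # partprobe /dev/sdc
    $ ls -l /dev/disk/by-partlabel/
    lrwxrwxrwx 1 root root 10 Feb 17 12:00 si-pwr-s2t -> ../../sdc1

    and then referring to /dev/disk/by-partlabel/si-pwr-s2t in your mdadm
    and LVM commands rather than /dev/sdc1.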

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From debian-user@howorth.org.uk@21:1/5 to gene heskett on Sat Feb 17 21:20:01 2024
    gene heskett <gheskett@shentel.net> wrote:
    On 2/16/24 15:47, Stefan Monnier wrote:
    One of the 1T samsungs in the md raid10 isn't entirely happy but
    mdadm has not fussed about it, and smartctl seems to say its ok
    after testing. Other than that the gui access delay (30+ seconds)
    problems I have did NOT go away when I moved /home off the raid
    to another SSD, so I may move it back. One of the reasons I ma
    rsync'ing this /home back to it every other day or so, takes < 5
    minutes.
    Please get a small SSD, do a fresh install, and test for the
    access delay. If the delay is not present, incrementally add and
    test applications. If you encounter the delay, please stop and
    post the details; console sessions are best. If not, then connect
    the disks with /home and test. If you encounter the delay, then
    please stop and post the details. If you do not encounter the
    delay, then your system is fixed. Take a Clonezilla image.

    FWIW, my crystal ball says "30s => software timeout rather than
    hardware problem"


    Stefan

    We are on the same page, but what is causing the timeout?

    You have to follow the steps David suggested including posting the
    details here as asked, before anybody will be able to answer your
    question!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to tomas@tuxteam.de on Sun Feb 18 02:10:03 2024
    On 2/17/24 00:35, tomas@tuxteam.de wrote:
    On Fri, Feb 16, 2024 at 12:12:06PM -0800, David Christensen wrote:
    On 2/15/24 17:44, gene heskett wrote:

    [...]

    Other than that the gui access delay (30+ seconds) problems I have did
    NOT go away when I moved /home off the raid to another SSD [...]

    I think at this point few are surprised by that. Last round of debugging
    we pretty much eliminated disk access as likey cause of those delays.

    The most hopeful cause for a candidate, IIRC, was some thingy deep in the DE trying to access an unavailable resource.

    Cheers
    Is there some way to identify that roadblock?

    It sure seems to me there ought to be a way to identify whatever it is
    that is causing it.

    Take care, stay warm and well Tomas

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gene heskett@21:1/5 to gene heskett on Sun Feb 18 02:30:01 2024
    On 2/17/24 00:47, gene heskett wrote:
    On 2/16/24 21:13, Andy Smith wrote:
    Hello,

    On Fri, Feb 16, 2024 at 02:02:59PM -0600, David Wright wrote:
    On Fri 16 Feb 2024 at 14:48:12 (+0000), Andy Smith wrote:
    No, because it's a filesystem label for the ext4 fs created on
    /dev/sdz1. If sdz1 is turned into an LVM Physical Volume, there
    won't be an ext4 filesystem on it any more. If sdz1 is turned into a
    member of an MD array, there won't be an ext4 filesystem on it any
    more. The labels go with the filesystem.

    It isn't a filesystem LABEL.

    Oh dear, I am lost. I don't use gparted but at least one person in
    this thread has said that Gene created a filesystem label not a
    partition name, and Gene doesn't know which he created, so I've gone
    from guessing partition name to fs label and now back to partition
    name again.

    I'm totally willing to believe that you know what you've created
    there though, so fair enough.

    You've not yet been clear about what you want, but from what little
    information you have provided you've been told multiple times by
    multiple people that filesystem labels won't help.
                            ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑

    … which would be moot if only Gene could create partition PARTLABELs
    successfully.

    Which I have found can also be done with gparted, so the 1st 2 drives,
    which will go in slot 2 as the top and bottom drives in that 2 drive
    adaptor, have had their partitions labeled as SIPWRS2T and SIPWRS2B,
    and labeled as such with a P-Touch. The other 2 that just walked in the
    door are still cold enough to sweat if unsealed.

    Sure, but we still don't know what Gene is trying to do or why
    partition names would be useful to him so I am kind of sceptical
    that this leads anywhere.

    That part if the ^%$ drives ever get here, I just looked at the front
    deck and it has 2" of fresh white stuff on it.

    To describe what I am building, this is a 5 slot bare drive cage. You
    could throw tom cats thru it from most angles so I printed pretty sides
    for it.

    I've printed drawers to fill those slots.  The top slot has a bpi-m5 in
    it, the bottom slot has a 5 volt 10 amp psu in it. slot 2 will have 2 of those nearly 4T SSD's in a 2 drive adapter, with full disk partitions on them, so obviously I should name the top one as "si-pwr-s2t". the bottom
    one then s/b si-pwr-s2b
    slot-3 then s/b si-pwr-s3t and si-pwr-s3b.
    slot-4 then is giga-s4t1 and giga-s4t2. ditto for the bottom one. named giga-s4b1 and giga-s4b2.  1 partition to hold amanda's database and one
    to serve as amanda's holding disk.

    Whats so meaningless to you that you can't see the utility in that? That
    has not been explained, so please educate me as to why you think its worthless?
    Thanks,
    Andy


    Cheers, Gene Heskett, CET.

    Cheers, Gene Heskett, CET.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From tomas@tuxteam.de@21:1/5 to gene heskett on Sun Feb 18 07:50:01 2024
    On Sat, Feb 17, 2024 at 07:59:52PM -0500, gene heskett wrote:
    On 2/17/24 00:35, tomas@tuxteam.de wrote:
    On Fri, Feb 16, 2024 at 12:12:06PM -0800, David Christensen wrote:
    On 2/15/24 17:44, gene heskett wrote:

    [...]

     Other than that the gui access delay (30+ seconds) problems I have did
    NOT go away when I moved /home off the raid to another SSD [...]

    I think at this point few are surprised by that. Last round of debugging
    we pretty much eliminated disk access as the likely cause of those delays.

    The most hopeful candidate for a cause, IIRC, was some thingy deep in the DE
    trying to access an unavailable resource.

    Cheers
    Is there some way to identify that roadblock?

    It sure seems to me there ought to be a way to identify whatever it is that is causing it..

    No single path, alas. The most pin-pointed description we have is some
    editor blocking while trying to "open a file" (whatever those gooey
    thingies do in that situation). So perhaps stracing it and seeing whether
    it's blocking in a system call might give a clue.

    Wading through the logs around that delay might, too. One trick I sometimes
    use in those cases is to have a teminal open and create syslog messages
    (with logger) to have timestamps marking the start/end of the perceived
    delays.
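
    Something along these lines, say (a sketch; the editor is just an
    example, any GUI app that shows the delay will do):

    $ logger "MARK: opening the file dialog now"
    $ strace -f -tt -T -o /tmp/editor.trace mousepad /tmp/test.txt
    (reproduce the delay, close the app, then look for the slow calls)
    $ grep -E '<[0-9]{2}\.' /tmp/editor.trace | tail
    $ logger "MARK: delay over"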

    Take care, stay warm and well Tomas

    First signs of spring around here.

    Take care
    --
    tomás


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)