• Automatically installing GRUB on multiple drives

    From Nicolas George@21:1/5 to All on Wed Jan 24 11:20:02 2024
    Hi.

    We have drives in mdadm RAID1.

    Since they are potential boot drives, we have to put a GPT on them.

    Since mdadm can only put its superblock at the end of the device (1.0),
    at the beginning of the device (1.1) or 4 KiB from the beginning (1.2),
    and nobody has invented a 1.3 format that would put the metadata 17 KiB
    from the beginning or the end, which would be necessary to be compatible
    with GPT, we have to partition the drives and put the EFI system
    partition outside the RAID.

    To keep things logical, we have the same partitions on all drives,
    including the EFI one. And GRUB is perfectly capable of booting the
    system (inside the LVM) inside the RAID inside the partition.

    Which leads me to wonder if there is an automated way to install GRUB on
    all the EFI partitions.

    The manual way is not that bad, but automated would be nice.

    Regards,

    --
    Nicolas George

  • From Thomas Schmitt@21:1/5 to Nicolas George on Wed Jan 24 12:50:01 2024
    Hi,

    i cannot make qualified proposals for the GRUB question, but stumble over
    your technical statements.

    Nicolas George wrote:
    Since mdadm can only put its superblock at the end of the device (1.0),
    at the beginning of the device (1.1) or 4 KiB from the beginning (1.2),
    and nobody has invented a 1.3 format that would put the metadata 17 KiB
    from the beginning or the end, which would be necessary to be compatible
    with GPT,

    Although it would be unusually small, it is possible to have a GPT of
    only 4 KiB of size:
    - 512 bytes for Protective MBR (the magic number of GPT)
    - 512 bytes for the GPT header block
    - 3 KiB for an array of 24 partition entries.
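    (That is 512 + 512 + 24 × 128 = 4096 bytes, i.e. 4 KiB in total.)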

    The question is, of course, whether any partition editor is willing to
    create such a small GPT. The internet says that sfdisk has "table-length"
    among its input "Header lines", so it would be a matter of learning and
    experimenting.
    (Possibly i would be faster with writing the first header blocks by
    hand, following UEFI specs or my cheat sheet in libisofs.)


    we have to partition them and put the EFI system partition outside
    them.

    Do you mean you partition them DOS-style?
    If so, then a partition of type 0xEF could be used as the system
    partition. Probably any partition type will do, because EFI is very eager
    to look into any partition with a FAT filesystem.
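
    For illustration, such a DOS-style layout could be described as sfdisk
    input roughly as follows (the sizes and the second partition's use as a
    RAID member are only assumptions):

        label: dos
        1 : start=2048, size=512MiB, type=ef
        2 : type=fd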


    Have a nice day :)

    Thomas

  • From Charles Curley@21:1/5 to Nicolas George on Wed Jan 24 15:30:01 2024
    On Wed, 24 Jan 2024 11:17:34 +0100
    Nicolas George <george@nsup.org> wrote:

    Which leads me to wonder if there is an automated way to install GRUB
    on all the EFI partitions.

    I'm not aware of any existing solutions.

    Perhaps a script based on:

    for i in a b c d e ; do echo /dev/sd$i ; grub-install /dev/sd$i ; done

    Or perhaps extract the relevant devices from the output of

    cat /proc/mdstat
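
    A sketch of what such a script might look like (everything here is an
    assumption: the ESPs are taken to be partition 1 of /dev/sda and
    /dev/sdb, mounted one at a time on /mnt/esp):

        #!/bin/sh
        set -e
        mnt=/mnt/esp
        mkdir -p "$mnt"
        for disk in /dev/sda /dev/sdb; do       # adjust to your RAID members
            esp="${disk}1"                      # assumed ESP partition
            mount "$esp" "$mnt"
            grub-install --target=x86_64-efi --efi-directory="$mnt" \
                         --bootloader-id=debian --no-nvram
            umount "$mnt"
        done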


    Although I found it simpler (and faster) to have all my system stuff on
    an SSD, and the RAID on four HDDs. Grub goes on the SSD and that's that.

    --
    Does anybody read signatures any more?

    https://charlescurley.com
    https://charlescurley.com/blog/

  • From Nicolas George@21:1/5 to All on Wed Jan 24 15:50:01 2024
    Charles Curley (12024-01-24):
    Perhaps a script based on:

    Thanks, I know how to write scripts. My question was specifically about
    making it automatic.

    Although I found it simpler (and faster) to have all my system stuff on
    an SSD, and the RAID on four HDDs. Grub goes on the SSD and that's that.

    If the SSD dies, your system does not boot. Somewhat wasting the benefit
    of RAID.

    Regards,

    --
    Nicolas George

  • From Felix Miata@21:1/5 to All on Wed Jan 24 17:10:01 2024
    Nicolas George composed on 2024-01-24 15:39 (UTC+0100):

    Charles Curley (12024-01-24):

    Although I found it simpler (and faster) to have all my system stuff on
    an SSD, and the RAID on four HDDs. Grub goes on the SSD and that's that.

    If the SSD dies, your system does not boot. Somewhat wasting the benefit
    of RAID.

    Technically, quite true. However, OS and user data are very different.
    User data recreation and/or restoration can range from painful to
    impossible, justifying RAID. An OS can be reinstalled rather easily in a
    nominal amount of time. A 120G SSD can hold multiple OS installations
    quite easily. A spare 120G SSD costs less than a petrol fillup. I stopped
    putting the OS on RAID when I got my first SSD. My current primary PC has
    5 18G OS installations, all bootable much more quickly than finding a
    suitable USB stick to rescue boot from.
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From Nicolas George@21:1/5 to All on Wed Jan 24 21:00:01 2024
    Franco Martelli (12024-01-24):
    If I run "grub-install" with multiple device I got

    # LCALL=C grub-install /dev/sd[a-d]
    grub-install: error: More than one install device?.

    maybe it is a deprecated action for grub to install to multiple device, so this should it be investigated?

    Do you believe it used to work? To the best of my knowledge it never
    did.

    --
    Nicolas George

  • From Nicolas George@21:1/5 to All on Wed Jan 24 21:10:01 2024
    Thomas Schmitt (12024-01-24):
    i cannot make qualified proposals for the GRUB question, but stumble over your technical statements.

    It was by far the most interesting reply. Better somebody who really
    understood the question, realized their limitations and knowingly
    replied with an interesting tangent than the opposite.

    Although it would be unusually small, it is possible to have a GPT of
    only 4 KiB of size:
    - 512 bytes for Protective MBR (the magic number of GPT)
    - 512 bytes for the GPT header block
    - 3 KiB for an array of 24 partition entries.

    The question is, of course, whether any partition editor is willing to
    create such a small GPT. The internet says that sfdisk has "table-length"
    among its input "Header lines", so it would be a matter of learning and
    experimenting.

    Interesting. Indeed, “table-length: 4” causes sfdisk to write only 3
    sectors at the beginning and 2 at the end. I checked that it really does
    not write anywhere else.

    That makes it possible to use full-disk RAID on a UEFI boot drive. Very
    good news.

    we have to partition them and put the EFI system partition outside
    them.
    Do you mean you partition them DOS-style ?

    No, GPT. More and more firmware will only boot from GPT. I think I have
    met only one firmware that booted UEFI, 32-bit, from an MBR.

    GPT
    ├─EFI
    └─RAID
       └─LVM (of course)

    Now, thanks to you, I know I can do:

    GPT
    ┊   RAID
    └───┤
        ├─EFI
        └─LVM

    It is rather ugly to have the same device be both a RAID with its
    superblock in the hole between the GPT and the first partition, and a GPT
    in the hole before the RAID superblock, but it serves its purpose: the
    EFI partition is kept in sync across all devices.

    It still requires setting the non-volatile variables, though.
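
    For reference, registering those variables for each drive's ESP could
    look roughly like this with efibootmgr (device names, partition number
    and the shim loader path are assumptions, not a tested recipe):

        efibootmgr --create --disk /dev/sda --part 1 \
                   --label "debian (sda)" --loader '\EFI\debian\shimx64.efi'
        efibootmgr --create --disk /dev/sdb --part 1 \
                   --label "debian (sdb)" --loader '\EFI\debian\shimx64.efi'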

    Thanks.

    Regards,

    --
    Nicolas George

  • From Andy Smith@21:1/5 to Nicolas George on Wed Jan 24 21:30:01 2024
    Hi,

    On Wed, Jan 24, 2024 at 11:17:34AM +0100, Nicolas George wrote:
    Since mdadm can only put its superblock at the end of the device (1.0),
    at the beginning of the device (1.1) or 4 KiB from the beginning (1.2),
    and nobody has invented a 1.3 format that would put the metadata 17 KiB
    from the beginning or the end, which would be necessary to be compatible
    with GPT, we have to partition the drives and put the EFI system
    partition outside the RAID.

    Sorry, what is the issue about being compatible with GPT?

    For example, here is one of the drives in a machine of mine, and it
    is a drive I boot from:

    $ sudo gdisk -l /dev/sda
    GPT fdisk (gdisk) version 1.0.3

    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sda: 7501476528 sectors, 3.5 TiB
    Model: INTEL SSDSC2KG03
    Sector size (logical/physical): 512/4096 bytes
    Disk identifier (GUID): D97BD886-7F31-9E46-B454-6703BC90AF09
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 2048, last usable sector is 7501476494
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 0 sectors (0 bytes)

    Number  Start (sector)    End (sector)  Size        Code  Name
       1            2048         1075199   524.0 MiB   EF00
       2         1075200         3172351   1024.0 MiB  FD00
       3         3172352         7366655   2.0 GiB     FD00
       4         7366656        24143871   8.0 GiB     FD00
       5        24143872      7501476494   3.5 TiB     FD00

    Here, sda1 is an EFI System Partition and sda2 is a RAID-1 member
    that comprises /boot when assembled. It is md2 when assembled which
    has superblock format 1.2:

    sudo mdadm --detail /dev/md2
    /dev/md2:
    Version : 1.2
    Creation Time : Mon Jun 7 22:21:08 2021
    Raid Level : raid1
    Array Size : 1046528 (1022.00 MiB 1071.64 MB)
    Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun Jan 21 00:00:07 2024
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    Consistency Policy : resync

    Name : tanq:2 (local to host tanq)
    UUID : ea533a16:63523ac4:da6bf866:508f8f1d
    Events : 459

    Number   Major   Minor   RaidDevice  State
       0       259       2        0      active sync   /dev/nvme0n1p2
       1         8       2        1      active sync   /dev/sda2

    Thus, grub is installed to sda and nvme0n1.

    Have I made an error here?

    Which leads me to wonder if there is an automated way to install GRUB on
    all the EFI partitions.

    I just install it on each boot drive, but you have me worried now
    that there is something I am ignorant of.

    There is also the issue of making the ESP redundant. I'd like to put
    it in RAID but I've been convinced that it is a bad idea: firmware
    will not understand md RAID and though it may be able to read it
    (due to it being RAID-1, 1.2 superblock), if it writes to it then it
    will desync the RAID.

    There was a deeper discussion of this issue here:

    https://lists.debian.org/debian-user/2020/11/msg00455.html

    As you can see, more people were in favour of manually syncing ESP
    contents to backup ESPs on other drives so that firmware can choose
    (or be told to choose) a different ESP in the event of trying to boot
    with a failed device.
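
    (A file-level version of that manual sync could be as simple as the
    following; the mount points are assumptions: the primary ESP on /boot/efi
    and a backup ESP on /boot/efi2.)

        # keep the backup ESP an exact copy of the primary one
        rsync -a --delete /boot/efi/ /boot/efi2/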

    I don't like it, but…

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Thomas Schmitt@21:1/5 to Nicolas George on Wed Jan 24 22:00:01 2024
    Hi,

    Nicolas George wrote:
    Interesting. Indeed, “table-length: 4” causes sfdisk to write only 3
    sectors at the beginning and 2 at the end. I checked that it really does
    not write anywhere else.
    That makes it possible to use full-disk RAID on a UEFI boot drive. Very
    good news.

    \o/

    (Nearly as good as Stefan Monnier's crystal ball. And that without understanding the dirty details which cause the need for a small partition table.)


    More and more firmwares will only boot with GPT. I think I met
    only once a firmware that booted UEFI, 32 bits, with a MBR

    The Debian installation and live ISOs have MBR partitions with only a
    flimsy echo of GPT. There is a GPT header block and an entries array.
    But it does not get announced by a Protective MBR. Rather, they have two
    partitions, of which one is meant to be invisible to EFI ("Empty") and
    one is advertised as the EFI partition:

    $ /sbin/fdisk -l debian-12.2.0-amd64-netinst.iso
    ...
    Disklabel type: dos
    ...
    Device                            Boot Start     End Sectors Size Id Type
    debian-12.2.0-amd64-netinst.iso1  *        0 1286143 1286144 628M  0 Empty
    debian-12.2.0-amd64-netinst.iso2      4476   23451   18976  9.3M ef EFI (FAT-12/16/32)

    So any system which boots this ISO from USB stick does not rely on
    the presence of a valid GPT.
    (The only particular example of GPT addiction i know of is old versions
    of OVMF, the EFI used by qemu, which wanted to see the GPT header block,
    even without Protective MBR.)

    This layout was invented by Matthew J. Garrett for Fedora and is still
    the most bootable of all possible weird ways to present boot stuff for
    legacy BIOS and EFI on a USB stick in the same image. (There are mad
    legacy BIOSes which hate EFI's demand for no MBR boot flag. Several
    distros abandoned the above layout in favor of plain MBR or plain GPT.
    The price is that they have to leave behind some of the existing
    machines.)


    GPT
    ├─EFI
    └─RAID
       └─LVM (of course)

    Now, thanks to you, I know I can do:

    GPT
    ┊   RAID
    └───┤
        ├─EFI
        └─LVM

    Ah. Now i understand how accidentally useful my technical nitpicking was.
    (A consequence of me playing Dr. Pol with the arm in the ISO 9660 cow
    up to my shoulder.)


    Have a nice day :)

    Thomas

  • From Felix Miata@21:1/5 to All on Thu Jan 25 00:10:01 2024
    Nicolas George composed on 2024-01-24 20:50 (UTC+0100):

    Felix Miata composed:

    Technically, quite true. However, OS and user data are very different.
    User data recreation and/or restoration can range from painful to
    impossible, justifying RAID. An OS can be reinstalled rather easily in a
    nominal amount of time. A 120G SSD can hold multiple OS installations
    quite easily. A spare 120G SSD costs less than a petrol fillup. I stopped
    putting the OS on RAID when I got my first SSD. My current primary PC has
    5 18G OS installations, all bootable much more quickly than finding a
    suitable USB stick to rescue boot from.

    Looks like you are confusing RAID with backups. Yes, the OS can be
    reinstalled, but that still means “a nominal amount of time” during which
    your computer is not available.

    Your “spare” SSD would be more usefully employed in a RAID array than
    corroding on your shelf.

    1: My spare SSD is part of my KISS configuration and backup protocols.
    Several minutes or even hours of downtime don't bother me. Its (actually,
    their) existence enables a second-PC virtual twin where upgrades and
    experiments are better evaluated. I still have doubts about how much to
    trust SSD technology. Their failure rate in less than 3 years since
    purchase here has been seriously disappointing: 4 RMAs across 3 brands,
    with a relative pittance of uptime each.

    2: I only use MD RAID1 with a single rotating-rust pair, currently 1T
    each. A disposable 120G SSD wouldn't fit.

    I like the concept of spares. :)
    --
    Evolution as taught in public schools is, like religion,
    based on faith, not based on science.

    Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

    Felix Miata

  • From Tim Woodall@21:1/5 to Nicolas George on Fri Jan 26 04:40:01 2024
    On Wed, 24 Jan 2024, Nicolas George wrote:

    It is rather ugly to have the same device be both a RAID with its
    superblock in the hole between the GPT and the first partition, and a GPT
    in the hole before the RAID superblock, but it serves its purpose: the
    EFI partition is kept in sync across all devices.

    Until your UEFI bios writes to the disk before the system has booted.

    I'll be interested to hear how this goes and whether it's reliable.

    I tried it years ago, using a no-superblock raid and custom initrd
    (initramfs as I think it was then) to start it, but upgrades, and even
    kernel updates, became 'terrifying'. Now I use dd to copy the start of
    the disk...

  • From Nicolas George@21:1/5 to All on Fri Jan 26 08:50:01 2024
    Tim Woodall (12024-01-26):
    Until your UEFI bios writes to the disk before the system has booted.

    Hi. Have you ever observed a UEFI firmware doing that? Without explicit
    admin instructions?

    Regards,

    --
    Nicolas George

  • From Andy Smith@21:1/5 to Nicolas George on Fri Jan 26 09:40:02 2024
    Hi Nicolas,

    On Fri, Jan 26, 2024 at 08:49:06AM +0100, Nicolas George wrote:
    Tim Woodall (12024-01-26):
    Until your UEFI bios writes to the disk before the system has booted.

    Hi. Have you ever observed an UEFI firmware doing that? Without explicit admin instructions?

    Going back to my question from 2020 about what people do to provide
    redundancy for EFI System Partition, do I take it then that you
    have had no issues with just putting ESP in MD RAID-1?

    The "firmware may write to it" thing was raised as a concern by a
    few people, but always a theoretical one from what I could see.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Nicolas George@21:1/5 to All on Fri Jan 26 10:20:01 2024
    Andy Smith (12024-01-26):
    Going back to my question from 2020 about what people do to provide redundancy for EFI System Partition, do I take it then that you
    have had no issues with just putting ESP in MD RAID-1?

    I have not had the occasion to test since two days ago when Thomas's
    remarks made me realize it was possible.

    The "firmware may write to it" thing was raised as a concern by a
    few people, but always a theoretical one from what I could see.

    Now that I think a little more, this concern is not only unconfirmed,
    it is rather absurd. The firmware would never write in parts of the
    drive that might contain data.

    At worst, it is possible to cover the RAID header with a dummy
    partition:

    label: gpt
    unit: sectors
    table-length: 24
    sector-size: 512
    first-lba: 8
    1 : start=8, size=2040, type=00000000-0000-0000-0000-000000000000
    2 : start=2048, size=30712, type=lvm
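
    (A hypothetical way to apply such a description, with /dev/sdX as a
    placeholder for the actual drive and the text above saved to a file;
    note that this rewrites the partition table:)

        sfdisk /dev/sdX < esp-over-raid.sfdisk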

    Regards,

    --
    Nicolas George

  • From Nicolas George@21:1/5 to All on Fri Jan 26 10:30:02 2024
    Thomas Schmitt (12024-01-24):
    The Debian installation and live ISOs have MBR partitions with only a
    flimsy echo of GPT. There is a GPT header block and an entries array.
    But it does not get announced by a Protective MBR. Rather, they have two
    partitions, of which one is meant to be invisible to EFI ("Empty") and
    one is advertised as the EFI partition:

    $ /sbin/fdisk -l debian-12.2.0-amd64-netinst.iso
    ...
    Disklabel type: dos
    ...
    Device                            Boot Start     End Sectors Size Id Type
    debian-12.2.0-amd64-netinst.iso1  *        0 1286143 1286144 628M  0 Empty
    debian-12.2.0-amd64-netinst.iso2      4476   23451   18976  9.3M ef EFI (FAT-12/16/32)

    So any system which boots this ISO from USB stick does not rely on
    the presence of a valid GPT.

    You seem to be assuming that the system will first check sector 0 to
    parse the MBR and then, if the MBR declares a GPT, try to use the
    GPT.

    I think it is the other way around on modern systems: it will first
    check sector 1 for a GPT header, and only if it fails check sector 0. Or
    not check sector 0 at all if legacy mode has been removed.

    This layout was invented by Matthew J. Garrett for Fedora and is still
    the most bootable of all possible weird ways to present boot stuff for
    legacy BIOS and EFI on USB stick in the same image.

    I think I invented independently something similar.

    https://nsup.org/~george/comp/live_iso_usb/grub_hybrid.html

    Regards,

    --
    Nicolas George

  • From Thomas Schmitt@21:1/5 to Nicolas George on Fri Jan 26 10:50:01 2024
    Hi,

    i hate to put in question the benefit of my proposal, but:

    Nicolas George wrote:
    The firmware would never write in parts of the
    drive that might contain data.

    See

    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1056998
    "cdrom: Installation media changes after booting it"

    Two occasions were shown in this bug where the EFI system partition of
    a Debian installation ISO on USB stick changed. One was caused by a
    Microsoft operating system, writing a file named WPSettings.dat. But the
    other was from Lenovo firmware writing /efi/Lenovo/BIOS/SelfHealing.fd .

    One may doubt that the success of these operations is desirable at all.
    The ISO was also tested with a not-anymore-writable DVD. In that case the Lenovo firmware did not raise protest over the fact that it was not
    possible to write to the EFI partition.


    Have a nice day :)

    Thomas

  • From Andy Smith@21:1/5 to Nicolas George on Fri Jan 26 11:00:02 2024
    Hello,

    On Fri, Jan 26, 2024 at 10:09:53AM +0100, Nicolas George wrote:
    Andy Smith (12024-01-26):
    The "firmware may write to it" thing was raised as a concern by a
    few people, but always a theoretical one from what I could see.

    Now that I think a little more, this concern is not only unconfirmed,
    it is rather absurd. The firmware would never write in parts of the
    drive that might contain data.

    I suppose my concern with that is that a firmware developer might
    feel justified in poking about in the ESP, which they might consider
    is there "for them".

    I have seen quite a few first hand reports of motherboard firmware
    that writes empty GPT when it sees a drive with no GPT, which I had
    previously considered unthinkable, so I do worry about trusting in
    the firmware developers.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Thomas Schmitt@21:1/5 to Nicolas George on Fri Jan 26 11:10:01 2024
    Hi,

    Nicolas George wrote:
    You seem to be assuming that the system will first check sector 0 to
    parse the MBR and then, if the MBR declares a GPT, try to use the
    GPT.

    That's what the UEFI specs prescribe. GPT is defined by UEFI-2.8 in
    chapter 5 "GUID Partition Table (GPT) Disk Layout". Especially:

    5.2.3 Protective MBR
    For a bootable disk, a Protective MBR must be located at LBA 0 (i.e.,
    the first logical block) of the disk if it is using the GPT disk layout.
    The Protective MBR precedes the GUID Partition Table Header to maintain
    compatibility with existing tools that do not understand GPT partition
    structures.


    I think it is the other way around on modern systems: it will first
    check sector 1 for a GPT header, and only if it fails check sector 0.

    Given the creativity of firmware programmers, this is not impossible.
    At least the programmers of older versions of OVMF took the presence of
    a GPT header as reason to boot, whereas without it did not boot.
    Meanwhile this demand for GPT debris has vanished and OVMF boots from
    media with only MBR partitions, too.


    I wrote:
    This layout [used by Debian installation ISOs] was invented by Matthew
    J. Garrett for Fedora

    I think I invented independently something similar. https://www.normalesup.org/~george/comp/live_iso_usb/grub_hybrid.html

    Not to forget Vladimir Serbinenko who specified how a grub-mkrescue ISO
    shall present its lures for BIOS and EFI on optical media and USB stick.
    The ISO has a pure GPT partition table, where the ISO filesystem is not
    mountable as a partition but only via the base device (like /dev/sdc) or
    possibly by its HFS+ directory tree via the Apple Partition Map, if
    present.

    (To create such an ISO, install grub-common, grub-efi-amd64, grub-efi-ia32,
    and grub-pc. Then run grub-mkrescue with some dummy directory as payload.
    The ISO will boot to a GRUB prompt.)
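
    A command-line sketch of that recipe (package choice slightly adapted:
    the *-bin packages carry just the boot images and can be installed
    alongside whatever bootloader is already in use; xorriso and mtools are
    needed by grub-mkrescue):

        apt-get install grub-pc-bin grub-efi-amd64-bin grub-efi-ia32-bin \
                        xorriso mtools
        mkdir -p /tmp/payload                 # dummy payload directory
        grub-mkrescue -o /tmp/rescue.iso /tmp/payload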


    Have a nice day :)

    Thomas

  • From Tim Woodall@21:1/5 to Nicolas George on Fri Jan 26 14:20:01 2024
    On Fri, 26 Jan 2024, Nicolas George wrote:


    Now that I think a little more, this concern is not only unconfirmed,
    it is rather absurd. The firmware would never write in parts of the
    drive that might contain data.

    UEFI understands the EFI system filesystem so it can "safely" write new
    files there.

    The danger then is that a write via mdadm corrupts the filesystem. I'm
    not sure if mdadm will detect the inconsistent data or assume both
    sources are the same.

    Hardware raid that the bios cannot subvert is obviously one solution.

  • From Tim Woodall@21:1/5 to Tim Woodall on Fri Jan 26 14:30:01 2024
    On Fri, 26 Jan 2024, Tim Woodall wrote:

    On Fri, 26 Jan 2024, Nicolas George wrote:


    Now that I think a little more, this concern is not only unconfirmed,
    it is rather absurd. The firmware would never write in parts of the
    drive that might contain data.

    UEFI understands the EFI system filesystem so it can "safely" write new
    files there.

    The danger then is that a write via mdadm corrupts the filesystem. I'm
    not sure if mdadm will detect the inconsistent data or assume both
    sources are the same.

    Hardware raid that the bios cannot subvert is obviously one solution.


    https://stackoverflow.com/questions/32324109/can-i-write-on-my-local-filesystem-using-efi

  • From gene heskett@21:1/5 to Tim Woodall on Fri Jan 26 14:50:01 2024
    On 1/26/24 08:19, Tim Woodall wrote:
    On Fri, 26 Jan 2024, Nicolas George wrote:


    Now that I think a little more, this concern is not only unconfirmed,
    it is rather absurd. The firmware would never write in parts of the
    drive that might contain data.

    UEFI understands the EFI system filesystem so it can "safely" write new
    files there.

    The danger then is that a write via mdadm corrupts the filesystem. I'm
    not sure if mdadm will detect the inconsistent data or assume both
    sources are the same.

    Hardware raid that the bios cannot subvert is obviously one solution.

    It is nearly the only solution, but it needs to have a hard-specified
    format that guarantees 100% compatibility across all makers, or they
    cannot use the word RAID in their advertising. I am sick of proprietary
    makers doing a job that subverts the method with the full intention of
    locking the customer to only their product. Let there be competition
    based on the quality of their products.

    Cheers, Gene Heskett.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis

  • From Andy Smith@21:1/5 to Tim Woodall on Fri Jan 26 15:20:01 2024
    Hello,

    On Fri, Jan 26, 2024 at 01:18:53PM +0000, Tim Woodall wrote:
    Hardware raid that the bios cannot subvert is obviously one solution.

    These days the different trade-offs for HW RAID are IMHO worse. I
    left it behind in 2014 and don't intend to go back. 😀

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Andy Smith@21:1/5 to gene heskett on Fri Jan 26 15:40:01 2024
    Hello,

    On Fri, Jan 26, 2024 at 08:40:42AM -0500, gene heskett wrote:
    On 1/26/24 08:19, Tim Woodall wrote:
    Hardware raid that the bios cannot subvert is obviously one solution.

    Is nearly the only solution,

    If the problem to be solved is defined as redundancy for the ESP,
    there are a bunch of solutions as already discussed. All of them
    come with upsides and downsides. The downsides of hardware RAID for
    this, for me, are too big.

    [hardware RAID] needs to have a hard specified format that
    guarantees 100% compatibility across all makers

    If that happened, mdadm could support it, and then I would continue
    to use mdadm. In fact it already has happened, in that Intel came up
    with a standard for its "fake RAID" data layout and mdadm does
    support it already. But of course, none of the other vendors of
    hardware RAID took that on.

    https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/rst-linux-paper.pdf

    It has also been pointed out that there is no technical reason why
    EFI firmware can't support MD RAID, since MD is open source.

    But on the whole, we can't wait around for any of that to happen.

    full intentions of locking the customer to only their product.

    There was a time when hardware RAID was really the only game in
    town, and the ability it gave to lock in the customer was just the
    cost of doing business.

    That time has passed, but I don't think the UEFI firmware developers
    are interested in helping out.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From hw@21:1/5 to Nicolas George on Fri Jan 26 17:00:01 2024
    On Wed, 2024-01-24 at 21:05 +0100, Nicolas George wrote:
    [...]
    GPT
    ├─EFI
    └─RAID
       └─LVM (of course)

    Now, thanks to you, I know I can do:

    GPT
    ┊   RAID
    └───┤
        ├─EFI
        └─LVM

    It is rather ugly to have the same device be both a RAID with its
    superblock in the hole between the GPT and the first partition, and a GPT
    in the hole before the RAID superblock, but it serves its purpose: the
    EFI partition is kept in sync across all devices.

    It still requires setting the non-volatile variables, though.

    How do you make the BIOS read the EFI partition when it's on mdadm
    RAID?

    It seems you have to have an EFI partition directly, outside sofware
    RAID, on each storage device, and that indeed raises the question how
    you keep them up to date so you can still boot when a disk has failed.
    It's a nasty problem.

    I use hardware RAID to avoid this problem ...

  • From Nicolas George@21:1/5 to All on Fri Jan 26 17:00:01 2024
    hw (12024-01-26):
    How do you make the BIOS read the EFI partition when it's on mdadm
    RAID?

    I have not yet tested but my working hypothesis is that the firmware
    will just ignore the RAID and read the EFI partition: with the scheme I described, the GPT points to the EFI partition and the EFI partition
    just contains the data.

    Of course, it only works with RAID1, where the data on disk is the data
    in RAID.

    Regards,

    --
    Nicolas George

  • From Andy Smith@21:1/5 to All on Fri Jan 26 17:00:01 2024
    Hello,

    On Fri, Jan 26, 2024 at 04:50:00PM +0100, hw wrote:
    How do you make the BIOS read the EFI partition when it's on mdadm
    RAID?

    If the MD superblock is in a part of the device not used by the
    filesystem (e.g. the end) and it is a RAID-1, each member device is
    indistinguishable from a plain FAT filesystem without RAID, as far as
    naive read-only software is concerned. This is also how grub booted from
    MD RAID-1 members before it understood MD RAID.
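
    (A sketch of creating such an ESP mirror, with metadata 1.0 so the
    superblock sits at the end of each member; the partition names are
    assumptions:)

        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              --metadata=1.0 /dev/sda1 /dev/sdb1
        mkfs.vfat -F 32 /dev/md0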

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Andy Smith@21:1/5 to All on Sun Jan 28 17:50:01 2024
    Hi,

    On Sun, Jan 28, 2024 at 05:17:14PM +0100, hw wrote:
    Ok if Andy and you are right, you could reasonably boot machines with
    an UEFI BIOS when using mdadm RAID :)

    I've been doing it for more than two decades, though not with UEFI.

    How is btrfs going to deal with this problem when using RAID? Require hardware RAID?

    Having to add mdadm RAID to a setup that uses btrfs just to keep efi partitions in sync would suck.

    ESPs have to be vfat, so why are you bringing up btrfs?

    If you want to use btrfs, use btrfs. UEFI firmware isn't going to
    care as long as your ESP is not inside that.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From hw@21:1/5 to Nicolas George on Sun Jan 28 17:20:01 2024
    On Fri, 2024-01-26 at 16:57 +0100, Nicolas George wrote:
    hw (12024-01-26):
    How do you make the BIOS read the EFI partition when it's on mdadm
    RAID?

    I have not yet tested but my working hypothesis is that the firmware
    will just ignore the RAID and read the EFI partition: with the scheme I described, the GPT points to the EFI partition and the EFI partition
    just contains the data.

    Of course, it only works with RAID1, where the data on disk is the data
    in RAID.

    Ok if Andy and you are right, you could reasonably boot machines with
    an UEFI BIOS when using mdadm RAID :)

    How is btrfs going to deal with this problem when using RAID? Require
    hardware RAID?

    Having to add mdadm RAID to a setup that uses btrfs just to keep efi
    partitions in sync would suck.

  • From Andy Smith@21:1/5 to Dan Ritter on Sun Jan 28 18:40:01 2024
    Hi,

    Keeping all this context because I don't actually see how the
    response matches the context and so I might have missed something…

    On Sun, Jan 28, 2024 at 11:54:05AM -0500, Dan Ritter wrote:
    hw wrote:
    How is btrfs going to deal with this problem when using RAID? Require hardware RAID?

    Having to add mdadm RAID to a setup that uses btrfs just to keep efi partitions in sync would suck.

    You can add hooks to update-initramfs or update-grub.

    To a first approximation:

    firstbootpart = wwn-0x5006942feedbee1-part1
    extrabootparts = wwn-0x5004269deafbead-part1 \
                     wwn-0x5001234adefabe-part1 \
                     wwn-0x5005432faebeeda-part1

    for eachpart in $extrabootparts ; \
        do cp /dev/disk/by-id/$firstbootpart /dev/disk/by-id/$eachpart ; done

    I realise that the above is pseudocode, but I have some issues with
    it, namely:

    a) I don't see what this has to do with btrfs, the subject of the
    message you are replying to. Then again, I also did not see what
    btrfs had to do with the thing that IT was replying to, so
    possibly I am very confused.

    b) My best interpretation of your message is that it solves the "how
    to keep ESPs in sync" question, but if it is intended to do that
    then you may as well have just said "just keep the ESPs in sync",
    because what you wrote is literally something like:

    cp /dev/disk/by-id/wwn-0x5002538d425560a4-part1 /dev/disk/by-id/wwn-0x5002538d425560b5-part1

    which …is rather like a "now draw the rest of the owl" sort of
    response given that it doesn't literally work and most of the job
    is in reworking that line of pseudocode into something that will
    actually work.

    If someone DOES want a script option that solves that problem, a
    couple of actual working scripts were supplied in the link I gave to
    the earlier thread:

    https://lists.debian.org/debian-user/2020/11/msg00455.html
    https://lists.debian.org/debian-user/2020/11/msg00458.html

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Dan Ritter@21:1/5 to All on Sun Jan 28 18:20:01 2024
    hw wrote:
    How is btrfs going to deal with this problem when using RAID? Require hardware RAID?

    Having to add mdadm RAID to a setup that uses btrfs just to keep efi partitions in sync would suck.


    You can add hooks to update-initramfs or update-grub.

    To a first approximation:

    firstbootpart = wwn-0x5006942feedbee1-part1
    extrabootparts = wwn-0x5004269deafbead-part1 \
                     wwn-0x5001234adefabe-part1 \
                     wwn-0x5005432faebeeda-part1

    for eachpart in $extrabootparts ; \
        do cp /dev/disk/by-id/$firstbootpart /dev/disk/by-id/$eachpart ; done

    You'll need to provide suitable values for the partitions, and
    remember to fix this when you change disks for any reason.

    And test it, because I have not even run it once.

    -dsr-
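
    A minimal runnable version of the same idea might look like this (the
    wwn-* names are placeholders, dd is used instead of cp, and the working
    scripts linked earlier in the thread do this far more carefully):

        #!/bin/sh
        set -e
        firstbootpart=wwn-0x5006942feedbee1-part1
        extrabootparts="wwn-0x5004269deafbead-part1 wwn-0x5001234adefabe-part1"

        for eachpart in $extrabootparts; do
            # clone the primary ESP block-for-block onto each backup ESP
            dd if=/dev/disk/by-id/"$firstbootpart" \
               of=/dev/disk/by-id/"$eachpart" bs=1M conv=fsync
        done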

  • From hw@21:1/5 to Andy Smith on Sun Jan 28 21:10:01 2024
    On Sun, 2024-01-28 at 16:46 +0000, Andy Smith wrote:
    Hi,

    On Sun, Jan 28, 2024 at 05:17:14PM +0100, hw wrote:
    Ok if Andy and you are right, you could reasonably boot machines with
    an UEFI BIOS when using mdadm RAID :)

    I've been doing it for more than two decades, though not with UEFI.

    How is btrfs going to deal with this problem when using RAID? Require hardware RAID?

    Having to add mdadm RAID to a setup that uses btrfs just to keep efi partitions in sync would suck.

    ESP have to be vfat so why are you bringing up btrfs?

    If you want to use btrfs, use btrfs. UEFI firmware isn't going to
    care as long as your ESP is not inside that.

    It's easy to boot from btrfs software RAID without further ado. These
    nasty and annoying UEFI partitions get in the way of that, since when
    you have several of them they are not kept in sync with each other out
    of the box.

    That easily leads to situations in which you can't boot after a disk
    has failed even though you have RAID. That is something that must not
    happen; it defeats the RAID. It's bad enough when you have access to
    the machine, and it's a total nightmare when you don't, because you'll
    have to somehow go there to fix it. If the disk holding the UEFI
    partition has failed and there's no redundancy that's at least
    sufficiently in sync, it's even worse.

    Show me any installer for Linux distributions that handles this
    sufficiently out of the box.

    When you don't use btrfs, you have either hardware RAID or mdraid. With
    hardware RAID, the problem doesn't come up. With mdadm RAID, it isn't
    much better than with btrfs, since out of the box you still don't have
    redundant UEFI partitions. With btrfs plus mdadm RAID, it's basically
    worse, because you have to deploy another variant of software RAID in
    addition to the RAID built into btrfs.

    So at least for boot disks, I'll go for hardware RAID whenever
    possible, especially with btrfs, until this problem is fixed. Or do
    you have a better option?

  • From Andy Smith@21:1/5 to All on Sun Jan 28 23:00:01 2024
    Hello,

    On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
    On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
    If someone DOES want a script option that solves that problem, a
    couple of actual working scripts were supplied in the link I gave to
    the earlier thread:

    https://lists.debian.org/debian-user/2020/11/msg00455.html
    https://lists.debian.org/debian-user/2020/11/msg00458.html

    Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
    in sync without extra scripts needed?

    Could you read the first link above.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From hw@21:1/5 to Andy Smith on Mon Jan 29 17:30:01 2024
    On Sun, 2024-01-28 at 21:55 +0000, Andy Smith wrote:
    Hello,

    On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
    On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
    If someone DOES want a script option that solves that problem, a
    couple of actual working scripts were supplied in the link I gave to
    the earlier thread:

    https://lists.debian.org/debian-user/2020/11/msg00455.html
    https://lists.debian.org/debian-user/2020/11/msg00458.html

    Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
    in sync without extra scripts needed?

    Could you read the first link above.

    I did, and it doesn't explain why you would need a bunch of scripts.

  • From hw@21:1/5 to Franco Martelli on Mon Jan 29 18:00:01 2024
    On Mon, 2024-01-29 at 14:45 +0100, Franco Martelli wrote:
    On 28/01/24 at 17:17, hw wrote:
    On Fri, 2024-01-26 at 16:57 +0100, Nicolas George wrote:
    hw (12024-01-26):
    How do you make the BIOS read the EFI partition when it's on mdadm RAID?

    I have not yet tested but my working hypothesis is that the firmware
    will just ignore the RAID and read the EFI partition: with the scheme I described, the GPT points to the EFI partition and the EFI partition
    just contains the data.

    Of course, it only works with RAID1, where the data on disk is the data in RAID.

    Ok if Andy and you are right, you could reasonably boot machines with
    an UEFI BIOS when using mdadm RAID :)

    There is a sort of HOWTO [1] published in the Arch Linux wiki [2], but I
    don't advise it because there are many things that could go wrong.

    Cheers,

    [1] https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/
    [2] https://wiki.archlinux.org/title/EFI_system_partition#ESP_on_software_RAID1

    Ok in that case, hardware RAID is a requirement for machines with UEFI
    BIOS since otherwise their reliability is insufficient.

    I didn't plan on using hardware RAID for my next server, and now
    things are getting way more complicated than they already are because
    I can't just keep using the disks from my current one :( Hmm ...

    But I'm glad that I looked into this.

  • From Nicolas George@21:1/5 to All on Mon Jan 29 18:10:01 2024
    hw (12024-01-29):
    Ok in that case, hardware RAID is a requirement for machines with UEFI

    That is not true, you can still put the RAID in a partition and keep the
    boot partitions in sync manually or with scripts.

    --
    Nicolas George

  • From tomas@tuxteam.de@21:1/5 to All on Mon Jan 29 18:50:01 2024
    On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:

    [...]

    Ok in that case, hardware RAID is a requirement for machines with UEFI
    BIOS since otherwise their reliability is insufficient.

    The price you pay for hardware RAID is that you need a compatible controller
    if you take your disks elsewhere (e.g. because your controller dies).

    With (Linux) software RAID you just need another Linux...

    Cheers
    --
    t


  • From Andy Smith@21:1/5 to All on Tue Jan 30 01:00:01 2024
    Hi,

    On Mon, Jan 29, 2024 at 05:28:56PM +0100, hw wrote:
    On Sun, 2024-01-28 at 21:55 +0000, Andy Smith wrote:
    On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
    On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
    If someone DOES want a script option that solves that problem, a
    couple of actual working scripts were supplied in the link I gave to the earlier thread:

    https://lists.debian.org/debian-user/2020/11/msg00455.html
    https://lists.debian.org/debian-user/2020/11/msg00458.html

    Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
    in sync without extra scripts needed?

    Could you read the first link above.

    I did, and it doesn't explain why you would need a bunch of scripts.

    I think you should read it again until you find the part where it
    clearly states what the problem is with using MD RAID for this. If
    you still can't find that part, there is likely to be a problem I
    can't assist with.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From Nicolas George@21:1/5 to All on Tue Jan 30 21:40:01 2024
    hw (12024-01-30):
    Yes, and how much effort and how reliable is doing that?

    Very little effort and probably more reliable than hardware RAID with closed-source hardware.

    --
    Nicolas George

  • From hw@21:1/5 to Nicolas George on Tue Jan 30 21:40:01 2024
    On Mon, 2024-01-29 at 18:00 +0100, Nicolas George wrote:
    hw (12024-01-29):
    Ok in that case, hardware RAID is a requirement for machines with UEFI

    That is not true, you can still put the RAID in a partition and keep the
    boot partitions in sync manually or with scripts.

    Yes, and how much effort and how reliable is doing that?

    I didn't say it can't be done.

  • From hw@21:1/5 to tomas@tuxteam.de on Tue Jan 30 21:50:01 2024
    On Mon, 2024-01-29 at 18:41 +0100, tomas@tuxteam.de wrote:
    On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:

    [...]

    Ok in that case, hardware RAID is a requirement for machines with UEFI
    BIOS since otherwise their reliability is insufficient.

    The price you pay for hardware RAID is that you need a compatible controller if you take your disks elsewhere (e.g. because your controller dies).

    How often do you take the system disks from one machine to another,
    and how often will the RAID controller fail?

    With (Linux) software RAID you just need another Linux...

    How's that supposed to help? The machine still won't boot if the disk
    with the UEFI partition has failed. Look at Linux installers, like
    the Debian installer or the Fedora installer. Last time I used
    either, none of them would automatically create or at least require
    redundant UEFI partitions --- at least for instances when software
    RAID is used --- to make it possible to boot when a disk has failed.
    It's a very bad oversight.

    Maybe the problem needs to be fixed in all the UEFI BIOSs. I don't
    think it'll happen, though.

  • From hw@21:1/5 to Andy Smith on Tue Jan 30 22:00:01 2024
    On Mon, 2024-01-29 at 23:53 +0000, Andy Smith wrote:
    Hi,

    On Mon, Jan 29, 2024 at 05:28:56PM +0100, hw wrote:
    On Sun, 2024-01-28 at 21:55 +0000, Andy Smith wrote:
    On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
    On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
    If someone DOES want a script option that solves that problem, a couple of actual working scripts were supplied in the link I gave to the earlier thread:

    https://lists.debian.org/debian-user/2020/11/msg00455.html
    https://lists.debian.org/debian-user/2020/11/msg00458.html

    Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions in sync without extra scripts needed?

    Could you read the first link above.

    I did, and it doesn't explain why you would need a bunch of scripts.

    I think you should read it again until you find the part where it
    clearly states what the problem is with using MD RAID for this. If
    you still can't find that part, there is likely to be a problem I
    can't assist with.

    That there may be a problem doesn't automatically mean that you need a
    bunch of scripts.

  • From tomas@tuxteam.de@21:1/5 to All on Wed Jan 31 06:40:01 2024
    On Tue, Jan 30, 2024 at 09:47:35PM +0100, hw wrote:
    On Mon, 2024-01-29 at 18:41 +0100, tomas@tuxteam.de wrote:
    On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:

    [...]

    Ok in that case, hardware RAID is a requirement for machines with UEFI BIOS since otherwise their reliability is insufficient.

    The price you pay for hardware RAID is that you need a compatible controller
    if you take your disks elsewhere (e.g. because your controller dies).

    How often do you take the system disks from one machine to another,
    and how often will the RAID controller fail?

    With (Linux) software RAID you just need another Linux...

    How's that supposed to help? The machine still won't boot if the disk
    with the UEFI partition has failed.

    We are talking about getting out of a catastrophic event. In such cases, booting is the smallest of problems: use your favourite rescue medium
    with a kernel which understands your RAID (and possibly other details
    of your storage setup, file systems, LUKS, whatever).

    [...]

    Maybe the problem needs to be fixed in all the UEFI BIOSs. I don't
    think it'll happen, though.

    This still makes sense if you want a hands-off recovery (think data
    centre far away). Still you won't recover from a broken motherboard.

    Cheers
    --
    t


  • From Andy Smith@21:1/5 to All on Wed Jan 31 16:20:01 2024
    Hi,

    On Tue, Jan 30, 2024 at 09:50:23PM +0100, hw wrote:
    On Mon, 2024-01-29 at 23:53 +0000, Andy Smith wrote:
    I think you should read it again until you find the part where it
    clearly states what the problem is with using MD RAID for this. If
    you still can't find that part, there is likely to be a problem I
    can't assist with.

    That there may be a problem doesn't automatically mean that you need a
    bunch of scripts.

    This is getting quite tedious.

    Multiple people have said that there is a concern that UEFI firmware
    might write to an ESP, which would invalidate the use of software
    RAID for the ESP.

    Multiple people have suggested instead syncing ESP partitions in
    userland. If you're going to do that then you'll need a script to do
    it.

    I don't understand what you find so difficult to grasp about this.
    If it's that you have some other proposal for solving this, it would
    be helpful for you to say so, instead of just repeating "why do you
    need scripts, you don't need scripts", because if you just repeat
    that, all I can do is repeat what I've already said until I become
    bored and stop.

    If your suggested solution is "use hardware RAID", no need to repeat
    that one though: I see you said it in a few other messages, and that
    suggestion has been received. Assume the conversation continues amongst
    people who don't like that suggestion.

    Otherwise, I don't think anyone knows what you have spent several
    messages trying to say. All we got was, "you don't need scripts".

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

  • From hw@21:1/5 to Andy Smith on Wed Jan 31 23:00:01 2024
    On Wed, 2024-01-31 at 15:16 +0000, Andy Smith wrote:
    Hi,

    On Tue, Jan 30, 2024 at 09:50:23PM +0100, hw wrote:
    On Mon, 2024-01-29 at 23:53 +0000, Andy Smith wrote:
    I think you should read it again until you find the part where it
    clearly states what the problem is with using MD RAID for this. If
    you still can't find that part, there is likely to be a problem I
    can't assist with.

    That there may be a problem doesn't automatically mean that you need a bunch of scripts.

    This is getting quite tedious.

    Multiple people have said that there is a concern that UEFI firmware
    might write to an ESP, which would invalidate the use of software
    RAID for the ESP.

    Multiple people have suggested instead syncing ESP partitions in
    userland. If you're going to do that then you'll need a script to do
    it.

    I don't understand what you find so difficult to grasp about this.

    You kept only saying 'read the link'. Well, read the link! It points
    out 4 choices and none of them says 'you need a bunch of scripts'.

    If it's that you have some other proposal for solving this, it would
    be helpful for you to say so

    I already said my solution is using hardware raid or fixing the
    problem in all the UEFI BIOSs.

    I'd also say that the BIOS must never write to the storage, be it to
    an UEFI partition or anywhere else. But that's a different topic.

    [...]
    If your suggested solution is "use hardware RAID", no need to repeat
    that one though: I see you said it in a few other messages, and that
    suggestion has been received. Assume the conversation continues amongst
    people who don't like that suggestion.

    Well, too late, I already said it again since you asked. Do you have
    a better solution? It's ok not to like this solution, but do you have
    a better one?

    Otherwise, I don't think anyone knows what you have spent several
    messages trying to say. All we got was, "you don't need scripts".

    What do you expect when you keep repeating 'read the link'.

  • From hw@21:1/5 to Nicolas George on Wed Jan 31 23:30:01 2024
    On Tue, 2024-01-30 at 21:35 +0100, Nicolas George wrote:
    hw (12024-01-30):
    Yes, and how much effort and how reliable is doing that?

    Very little effort and probably more reliable than hardware RAID with closed-source hardware.

    Well, I doubt it. After all, you need to copy a whole partition and
    must make sure that it doesn't break across distribution upgrades and
    all kinds of possible changes, and even when someone shuts down or
    reboots the computer or pulls the plug while the copying is still in
    progress. You also must make sure that the boot manager is installed
    on multiple disks.

    And when are you going to do it? When shutting the machine down?
    That might not happen, and when it does happen, maybe you don't want to
    wait on it.

    When rebooting it? Perhaps you don't want to overwrite the copy at
    that time, or perhaps it's too late then because there were software
    updates before you rebooted and one of the disks failed when
    rebooting.

    Do you suggest to install backup batteries or capacitors to keep the
    machine running until the copying process has completed when the power
    goes out?

    Or do you want to do it all manually at a time convenient for you?
    What if you forgot to do it?


    I'm not so silly that you could convince me that you can do it more
    reliably with a bunch of scripts you put together yourself than the
    hardware RAID does it, especially not by implying that hardware RAID,
    which has been tested in many datacenters on who knows how many hundreds
    of thousands of machines over many years, uses closed-source software
    which has been maintained and therefore must be unreliable.

    The lowest MTBF listed for hardware RAID controllers on [1] is over
    260000 hours (i.e. about 30 years). Can you show that you can do it more
    reliably with your bunch of scripts?


    [1]: https://www.intel.com/content/www/us/en/support/articles/000007641/server-products/sasraid.html

  • From hw@21:1/5 to tomas@tuxteam.de on Wed Jan 31 23:50:01 2024
    On Wed, 2024-01-31 at 06:33 +0100, tomas@tuxteam.de wrote:
    On Tue, Jan 30, 2024 at 09:47:35PM +0100, hw wrote:
    On Mon, 2024-01-29 at 18:41 +0100, tomas@tuxteam.de wrote:
    On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:

    [...]

    Ok in that case, hardware RAID is a requirement for machines with UEFI BIOS since otherwise their reliability is insufficient.

    The price you pay for hardware RAID is that you need a compatible controller
    if you take your disks elsewhere (e.g. because your controller dies).

    How often do you take the system disks from one machine to another,
    and how often will the RAID controller fail?

    With (Linux) software RAID you just need another Linux...

    How's that supposed to help? The machine still won't boot if the disk
    with the UEFI partition has failed.

    We are talking about getting out of a catastrophic event. In such cases, booting is the smallest of problems: use your favourite rescue medium
    with a kernel which understands your RAID (and possibly other details
    of your storage setup, file systems, LUKS, whatever).

    Try to do that with a remote machine.

    [...]

    Maybe the problem needs to be fixed in all the UEFI BIOSs. I don't
    think it'll happen, though.

    This still makes sense if you want a hands-off recovery (think data
    centre far away). Still you won't recover from a broken motherboard.

    It would make sense for all the UEFI BIOSes to be fixed so that they do
    not create this problem in the first place, as they shouldn't.

    You seem to forget the point that one reason for using redundant
    storage, like some kind of RAID, to boot from, is that I don't want to
    have booting issues, especially not with remote machines.

    Unfortunately UEFI BIOSs make that difficult unless you use hardware
    raid.

    And I don't want to have that problem with local machines either
    because it's a really nasty problem. How do you even restore the UEFI partition when the disk it's on has failed and you don't have a copy?

  • From Nicolas George@21:1/5 to All on Wed Jan 31 23:30:01 2024
    hw (12024-01-31):
    Well, I doubt it.

    Well, doubt it all you want. In the meantime, we will continue to use
    it.

    Did not read the rest, not interested in red herring nightmare
    scenarios.

    --
    Nicolas George

  • From hw@21:1/5 to Nicolas George on Fri Feb 2 04:50:01 2024
    On Wed, 2024-01-31 at 23:28 +0100, Nicolas George wrote:
    hw (12024-01-31):
    Well, I doubt it.

    Well, doubt it all you want. In the meantime, we will continue to use
    it.

    Did not read the rest, not interested in red herring nightmare
    scenarios.


    You'll figure it out eventually. Meanwhile, you may be happier
    unsubscribing from all mailing lists, since you're not interested in
    what people have to say. In any case, I'm filtering all emails from
    you straight into the trash folder now.

  • From hw@21:1/5 to Franco Martelli on Fri Feb 2 20:00:01 2024
    On Fri, 2024-02-02 at 14:41 +0100, Franco Martelli wrote:
    On 31/01/24 at 22:51, hw wrote:
    [...]
    If your suggested solution is "use hardware RAID", no need to repeat
    that one though: I see you said it in a few other messages, and that
    suggestion has been received. Assume the conversation continues amongst
    people who don't like that suggestion.

    Well, too late, I already said it again since you asked. Do you have
    a better solution? It's ok not to like this solution, but do you have
    a better one?

    There is an alternative to hardware RAID if you want a Linux RAID: you
    can disable UEFI in the BIOS and delete the ESP as I did when I bought
    my gaming PC several years ago.

    I created my software RAID level 5 using debian-installer and it works
    perfectly without an ESP; you have to choose "Expert install" under
    "Advanced options". I installed Bookworm this way when it was released.

    Right, I forgot about that. Is that always an option?
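
    (Side note on the legacy-boot route: on Debian with BIOS boot, the
    grub-pc package can be told to keep several disks up to date, which is
    about as close to "automated" as it gets for that case. A sketch,
    assuming the RAID members are the disks you select at the prompt:)

        dpkg-reconfigure grub-pc   # select every RAID member as an install device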

  • From Andy Smith@21:1/5 to Franco Martelli on Sat Feb 3 19:30:01 2024
    Hi,

    On Fri, Feb 02, 2024 at 02:41:38PM +0100, Franco Martelli wrote:
    There is an alternative to hardware RAID if you want a Linux RAID: you can disable UEFI in the BIOS and delete the ESP as I did when I bought my gaming PC several years ago.

    I have storage devices which legacy BIOS cannot see for booting
    purposes. In past years these would have required an "option ROM"; today,
    they require UEFI firmware. They aren't exotic devices; just
    enterprise NVMe.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting
