Since mdadm can only put its superblock at the end of the device (1.0),
at the beginning of the device (1.1), or 4 KiB from the beginning (1.2),
but they still have not invented a 1.3 to put the metadata 17 KiB from
the beginning or the end, which would be necessary to be compatible with
GPT, we have to partition them and put the EFI system partition outside
them.
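A hedged sketch of what those placements look like in practice (device
names are illustrative, and --create is destructive):

$ mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda /dev/sdb   # or --metadata=1.1 / 1.2
$ mdadm --examine /dev/sda | grep 'Super Offset'

mdadm --examine reports where the superblock actually landed on each
member.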
Which leads me to wonder if there is an automated way to install GRUB
on all the EFI partitions.
Perhaps a script based on:
Although I found it simpler (and faster) to have all my system stuff on
an SSD, and the RAID on four HDDs. Grub goes on the SSD and that's that.
Charles Curley (12024-01-24):
Although I found it simpler (and faster) to have all my system stuff on
an SSD, and the RAID on four HDDs. Grub goes on the SSD and that's that.
If the SSD dies, your system does not boot. Somewhat wasting the benefit
of RAID.
If I run "grub-install" with multiple devices I get:
# LC_ALL=C grub-install /dev/sd[a-d]
grub-install: error: More than one install device?.
Maybe installing to multiple devices at once is a deprecated action for GRUB; should this be investigated?
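grub-install for EFI targets does not actually need a device argument,
so a hedged workaround is to run it once per mounted ESP (the mount
points and bootloader id here are illustrative):

for esp in /boot/efi /boot/efi2 /boot/efi3; do
    grub-install --target=x86_64-efi --efi-directory="$esp" \
                 --bootloader-id=debian --no-nvram
done

--no-nvram skips writing a boot entry to the firmware's non-volatile
variables, which would otherwise be repeated for every ESP.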
I cannot make qualified proposals for the GRUB question, but I stumbled over your technical statements.
Although it would be unusually small, it is possible to have a GPT of
only 4 KiB in size:
- 512 bytes for the protective MBR (the magic number of GPT)
- 512 bytes for the GPT header block
- 3 KiB for an array of 24 partition entries.
The question is, of course, whether any partition editor is willing to
create such a small GPT. The internet says that sfdisk has
"table-length" among its input "Header lines". So it would be a matter
of learning and experimenting.
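A hedged sketch of such an experiment (the device name is illustrative,
and sfdisk will overwrite whatever partition table is already there):

$ printf '%s\n' 'label: gpt' 'table-length: 4' \
      'size=512MiB, type=uefi' | sfdisk /dev/sdX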
we have to partition them and put the EFI system partition outside
them.
Do you mean you partition them DOS-style?
Since mdadm can only put its superblock at the end of the device (1.0),
at the beginning of the device (1.1), or 4 KiB from the beginning (1.2),
but they still have not invented a 1.3 to put the metadata 17 KiB from
the beginning or the end, which would be necessary to be compatible with
GPT, we have to partition them and put the EFI system partition outside
them.
Which leads me to wonder if there is an automated way to install GRUB on
all the EFI partitions.
Interesting. Indeed, “table-length: 4” causes sfdisk to only write 3
sectors at the beginning and 2 at the end. I checked that it really
does not write elsewhere.
That makes it possible to use full-disk RAID on a UEFI boot drive. Very
good news.
More and more firmware will only boot with GPT. I think I have met
only once a firmware that booted UEFI, 32-bit, with an MBR.
GPT
├─EFI
└─RAID
   └─LVM (of course)
Now, thanks to you, I know I can do:
GPT
┊ RAID
└───┤
    ├─EFI
    └─LVM
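If someone wants to check that the two structures really do not
collide, a hedged way to see it (with 1.2 metadata the superblock sits
8 sectors in, and the RAID payload only starts at the data offset):

$ mdadm --examine /dev/sda | grep -E 'Super Offset|Data Offset'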
Felix Miata composed:
Technically, quite true. However, OS and user data are very different.
User data recreation and/or restoration can be as painful as
impossible, justifying RAID. An OS can be reinstalled rather easily in
a nominal amount of time. A 120G SSD can hold multiple OS installations
quite easily. A spare 120G SSD costs less than a petrol fillup. I
stopped putting the OS on RAID when I got my first SSD. My current
primary PC has 5 18G OS installations, all bootable much more quickly
than finding a suitable USB stick to rescue boot from.
It looks like you are confusing RAID with backups. Yes, the OS can be
reinstalled, but that still means “a nominal amount of time” during
which your computer is not available.
Your “spare” SSD would be more usefully used in a RAID array than
corroding on your shelf.
It is rather ugly to have the same device be both a RAID with its
superblock in the hole between the GPT and the first partition and a
GPT in the hole before the RAID superblock, but it serves its purpose:
the EFI partition is kept in sync across all devices.
Until your UEFI bios writes to the disk before the system has booted.
Tim Woodall (12024-01-26):
Until your UEFI bios writes to the disk before the system has booted.
Hi. Have you ever observed a UEFI firmware doing that? Without explicit admin instructions?
Going back to my question from 2020 about what people do to provide
redundancy for the EFI System Partition, do I take it then that you
have had no issues with just putting the ESP in MD RAID-1?
The "firmware may write to it" thing was raised as a concern by a
few people, but always a theoretical one from what I could see.
The Debian installation and live ISOs have MBR partitions with only a
flimsy echo of GPT. There is a GPT header block and an entries array,
but it does not get announced by a protective MBR. Rather, they have
two partitions, of which one is meant to be invisible to EFI ("Empty")
and one is advertised as an EFI partition:
$ /sbin/fdisk -l debian-12.2.0-amd64-netinst.iso
...
Disklabel type: dos
...
Device                           Boot Start     End Sectors  Size Id Type
debian-12.2.0-amd64-netinst.iso1 *        0 1286143 1286144  628M  0 Empty
debian-12.2.0-amd64-netinst.iso2       4476   23451   18976  9.3M ef EFI (FAT-12/16/32)
So any system which boots this ISO from a USB stick does not rely on
the presence of a valid GPT.
This layout was invented by Matthew J. Garrett for Fedora and is still
the most bootable of all possible weird ways to present boot stuff for
legacy BIOS and EFI on USB stick in the same image.
Andy Smith (12024-01-26):
The "firmware may write to it" thing was raised as a concern by a
few people,but always a theoretical one from what I could see.
Now that I think a little more, this concern is not only unconfirmed,
it is rather absurd. The firmware would never write in parts of the
drive that might contain data.
You seem to be assuming that the system will first check sector 0 to
parse the MBR and then, if the MBR declares a GPT, try to use the
GPT.
I think it is the other way around on modern systems: it will first
check sector 1 for a GPT header, and only if it fails check sector 0.
This layout [used by Debian installation ISOs] was invented by Matthew
J. Garrett for Fedora
I think I independently invented something similar. https://www.normalesup.org/~george/comp/live_iso_usb/grub_hybrid.html
On Fri, 26 Jan 2024, Nicolas George wrote:
Now that I think a little more, this concern is not only unconfirmed,
it is rather absurd. The firmware would never write in parts of the
drive that might contain data.
UEFI understands the EFI system filesystem so it can "safely" write new
files there.
The danger then is that a write via mdadm corrupts the filesystem. I'm
not sure if mdadm will detect the inconsistent data or assume both
sources are the same.
Hardware raid that the bios cannot subvert is obviously one solution.
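As a hedged partial mitigation: md can at least be asked to scan a
RAID1 for divergent mirror halves, even though it cannot tell which
half is the right one:

$ echo check > /sys/block/md0/md/sync_action
$ cat /sys/block/md0/md/mismatch_cnt    # non-zero means divergence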
On 1/26/24 08:19, Tim Woodall wrote:
Hardware raid that the bios cannot subvert is obviously one solution.
It is nearly the only solution, but [hardware RAID] needs to have a
hard-specified format that guarantees 100% compatibility across all
makers, and makers have full intentions of locking the customer to only
their product.
[...]
GPT
├─EFI
└─RAID
   └─LVM (of course)
Now, thanks to you, I know I can do:
GPT
┊ RAID
└───┤
    ├─EFI
    └─LVM
It is rather ugly to have the same device be both a RAID with its
superblock in the hole between the GPT and the first partition and a
GPT in the hole before the RAID superblock, but it serves its purpose:
the EFI partition is kept in sync across all devices.
It still requires setting the non-volatile variables, though.
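For example, a hedged sketch of registering one boot entry per disk
with efibootmgr (device paths and the label are illustrative):

$ efibootmgr --create --disk /dev/sda --part 1 \
      --label 'debian (sda)' --loader '\EFI\debian\grubx64.efi'
$ efibootmgr --create --disk /dev/sdb --part 1 \
      --label 'debian (sdb)' --loader '\EFI\debian\grubx64.efi'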
How do you make the BIOS read the EFI partition when it's on mdadm
RAID?
Ok if Andy and you are right, you could reasonably boot machines with
a UEFI BIOS when using mdadm RAID :)
How is btrfs going to deal with this problem when using RAID? Require hardware RAID?
Having to add mdadm RAID to a setup that uses btrfs just to keep EFI partitions in sync would suck.
hw (12024-01-26):
How do you make the BIOS read the EFI partition when it's on mdadm
RAID?
I have not yet tested but my working hypothesis is that the firmware
will just ignore the RAID and read the EFI partition: with the scheme I described, the GPT points to the EFI partition and the EFI partition
just contains the data.
Of course, it only works with RAID1, where the data on disk is the data
in RAID.
hw wrote:
How is btrfs going to deal with this problem when using RAID? Require hardware RAID?
Having to add mdadm RAID to a setup that uses btrfs just to keep EFI partitions in sync would suck.
You can add hooks to update-initramfs or update-grub.
To a first approximation:
# The first ESP is the canonical copy; the others get a raw copy of it.
firstbootpart=wwn-0x5006942feedbee1-part1
extrabootparts="wwn-0x5004269deafbead-part1 \
    wwn-0x5001234adefabe-part1 \
    wwn-0x5005432faebeeda-part1"
for eachpart in $extrabootparts ; do
    cp /dev/disk/by-id/$firstbootpart /dev/disk/by-id/$eachpart
done
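One hedged place to hang such a script on Debian is the directory that
initramfs-tools runs after every initramfs rebuild (the script name
here is illustrative):

$ install -m 0755 sync-esp.sh /etc/initramfs/post-update.d/zz-sync-esp

The extra ESPs should of course be unmounted (or mounted read-only)
while they are being overwritten.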
Hi,
On Sun, Jan 28, 2024 at 05:17:14PM +0100, hw wrote:
Ok if Andy and you are right, you could reasonably boot machines with
a UEFI BIOS when using mdadm RAID :)
I've been doing it for more than two decades, though not with UEFI.
How is btrfs going to deal with this problem when using RAID? Require hardware RAID?
Having to add mdadm RAID to a setup that uses btrfs just to keep EFI partitions in sync would suck.
The ESP has to be vfat, so why are you bringing up btrfs?
If you want to use btrfs, use btrfs. UEFI firmware isn't going to
care as long as your ESP is not inside that.
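A hedged sketch of such a layout (device names illustrative): btrfs
RAID1 across the big partitions, with a plain vfat ESP on each disk
kept in sync by other means:

$ mkfs.vfat /dev/sda1
$ mkfs.vfat /dev/sdb1
$ mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sdb2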
On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
If someone DOES want a script option that solves that problem, a
couple of actual working scripts were supplied in the link I gave to
the earlier thread:
https://lists.debian.org/debian-user/2020/11/msg00455.html
https://lists.debian.org/debian-user/2020/11/msg00458.html
Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
in sync, without needing extra scripts?
Hello,
On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
If someone DOES want a script option that solves that problem, a
couple of actual working scripts were supplied in the link I gave to
the earlier thread:
https://lists.debian.org/debian-user/2020/11/msg00455.html
https://lists.debian.org/debian-user/2020/11/msg00458.html
Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
in sync, without needing extra scripts?
Could you read the first link above?
On 28/01/24 at 17:17, hw wrote:
On Fri, 2024-01-26 at 16:57 +0100, Nicolas George wrote:
hw (12024-01-26):
How do you make the BIOS read the EFI partition when it's on mdadm RAID?
I have not yet tested but my working hypothesis is that the firmware
will just ignore the RAID and read the EFI partition: with the scheme I described, the GPT points to the EFI partition and the EFI partition
just contains the data.
Of course, it only works with RAID1, where the data on disk is the data in RAID.
Ok if Andy and you are right, you could reasonably boot machines with
a UEFI BIOS when using mdadm RAID :)
There is a sort of HOWTO [1] published in the Arch Linux wiki [2], but
I don't advise it because there are many things that could go wrong.
Cheers,
[1] https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/
[2] https://wiki.archlinux.org/title/EFI_system_partition#ESP_on_software_RAID1
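Roughly, the approach in [1] is (a hedged sketch, device names
illustrative): mirror only the ESP with 1.0 metadata, so the superblock
sits at the end of the partition and the firmware sees each member as a
plain FAT filesystem:

$ mdadm --create /dev/md/esp --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 /dev/sdb1
$ mkfs.vfat /dev/md/esp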
Ok in that case, hardware RAID is a requirement for machines with UEFI
BIOS since otherwise their reliability is insufficient.
On Sun, 2024-01-28 at 21:55 +0000, Andy Smith wrote:
On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
If someone DOES want a script option that solves that problem, a
couple of actual working scripts were supplied in the link I gave to the earlier thread:
https://lists.debian.org/debian-user/2020/11/msg00455.html
https://lists.debian.org/debian-user/2020/11/msg00458.html
Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
in sync, without needing extra scripts?
Could you read the first link above?
I did, and it doesn't explain why you would need a bunch of scripts.
Yes, and how much effort does that take, and how reliable is it?
hw (12024-01-29):
Ok in that case, hardware RAID is a requirement for machines with UEFI
That is not true, you can still put the RAID in a partition and keep the
boot partitions in sync manually or with scripts.
On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:
[...]
Ok in that case, hardware RAID is a requirement for machines with UEFI
BIOS since otherwise their reliability is insufficient.
The price you pay for hardware RAID is that you need a compatible controller if you take your disks elsewhere (e.g. because your controller dies).
With (Linux) software RAID you just need another Linux...
Hi,
On Mon, Jan 29, 2024 at 05:28:56PM +0100, hw wrote:
On Sun, 2024-01-28 at 21:55 +0000, Andy Smith wrote:
On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
On Sun, 2024-01-28 at 17:32 +0000, Andy Smith wrote:
If someone DOES want a script option that solves that problem, a couple of actual working scripts were supplied in the link I gave to the earlier thread:
https://lists.debian.org/debian-user/2020/11/msg00455.html
https://lists.debian.org/debian-user/2020/11/msg00458.html
Huh? Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions in sync, without needing extra scripts?
Could you read the first link above?
I did, and it doesn't explain why you would need a bunch of scripts.
I think you should read it again until you find the part where it
clearly states what the problem is with using MD RAID for this. If
you still can't find that part, there is likely to be a problem I
can't assist with.
On Mon, 2024-01-29 at 18:41 +0100, tomas@tuxteam.de wrote:
On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:
[...]
Ok in that case, hardware RAID is a requirement for machines with UEFI BIOS since otherwise their reliability is insufficient.
The price you pay for hardware RAID is that you need a compatible controller
if you take your disks elsewhere (e.g. because your controller dies).
How often do you take the system disks from one machine to another,
and how often will the RAID controller fail?
With (Linux) software RAID you just need another Linux...
How's that supposed to help? The machine still won't boot if the disk
with the UEFI partition has failed.
Maybe the problem needs to be fixed in all the UEFI BIOSs. I don't
think it'll happen, though.
On Mon, 2024-01-29 at 23:53 +0000, Andy Smith wrote:
I think you should read it again until you find the part where it
clearly states what the problem is with using MD RAID for this. If
you still can't find that part, there is likely to be a problem I
can't assist with.
That there may be a problem doesn't automatically mean that you need a
bunch of scripts.
Hi,
On Tue, Jan 30, 2024 at 09:50:23PM +0100, hw wrote:
On Mon, 2024-01-29 at 23:53 +0000, Andy Smith wrote:
I think you should read it again until you find the part where it
clearly states what the problem is with using MD RAID for this. If
you still can't find that part, there is likely to be a problem I
can't assist with.
That there may be a problem doesn't automatically mean that you need a bunch of scripts.
This is getting quite tedious.
Multiple people have said that there is a concern that UEFI firmware
might write to an ESP, which would invalidate the use of software
RAID for the ESP.
Multiple people have suggested instead syncing ESP partitions in
userland. If you're going to do that then you'll need a script to do
it.
I don't understand what you find so difficult to grasp about this.
If it's that you have some other proposal for solving this, it would
be helpful for you to say so.
[...]
If your suggested solution is "use hardware RAID", no need to repeat
that one though: I see you said it in a few other messages, and that
suggestion has been received. Assume the conversation continues
amongst people who don't like that suggestion.
Otherwise, I don't think anyone knows what you have spent several
messages trying to say. All we got was, "you don't need scripts".
hw (12024-01-30):
Yes, and how much effort does that take, and how reliable is it?
Very little effort and probably more reliable than hardware RAID with closed-source hardware.
On Tue, Jan 30, 2024 at 09:47:35PM +0100, hw wrote:
On Mon, 2024-01-29 at 18:41 +0100, tomas@tuxteam.de wrote:
On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:
[...]
Ok in that case, hardware RAID is a requirement for machines with UEFI BIOS since otherwise their reliability is insufficient.
The price you pay for hardware RAID is that you need a compatible controller
if you take your disks elsewhere (e.g. because your controller dies).
How often do you take the system disks from one machine to another,
and how often will the RAID controller fail?
With (Linux) software RAID you just need another Linux...
How's that supposed to help? The machine still won't boot if the disk
with the UEFI partition has failed.
We are talking about getting out of a catastrophic event. In such cases, booting is the smallest of problems: use your favourite rescue medium
with a kernel which understands your RAID (and possibly other details
of your storage setup, file systems, LUKS, whatever).
[...]
Maybe the problem needs to be fixed in all the UEFI BIOSs. I don't
think it'll happen, though.
This still makes sense if you want a hands-off recovery (think data
centre far away). Still you won't recover from a broken motherboard.
Well, I doubt it.
hw (12024-01-31):
Well, I doubt it.
Well, doubt it all you want. In the meantime, we will continue to use
it.
Did not read the rest, not interested in red herring nightmare
scenarios.
On 31/01/24 at 22:51, hw wrote:
[...]
If your suggested solution is "use hardware RAID", no need to repeat
that one though: I see you said it in a few other messages, and that
suggestion has been received. Assume the conversation continues
amongst people who don't like that suggestion.
Well, too late, I already said it again since you asked. Do you have
a better solution? It's ok not to like this solution, but do you have
a better one?
There is an alternative to hardware RAID if you want a Linux RAID: you
can disable UEFI in the BIOS and delete the ESP, as I did when I bought
my gaming PC several years ago.
I created my software RAID 5 using the debian-installer and it works
perfectly without an ESP; you have to choose "Expert install" under
"Advanced options". I installed Bookworm this way when it was released.