All I could find was this:
http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276
For a program with so much documentation, GRUB seems sorely lacking in
this respect. It makes me glad I decided to keep /boot off my zpools.
Aren't new features only applied to newly created pools and datasets?
If /boot was compatible with GRUB it should still be. At least that's
how I read the einfo messages zfs spits out.
That is correct. I want to know when I can enable the new features.
Of course, it would also be nice to know which features must be
disabled when creating a new boot pool.
It is just frustrating that as far as I can tell, there is no
documentation about what features grub supports. It wouldn't be hard
for them to just say "here are the zpool features known to work in
grub version foo." The best I've found is the Arch wiki, which has
the disclaimer that it is probably out of date and to check the man
pages, but of course the man pages don't actually say anything.
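For what it's worth, OpenZFS 2.1 and later ship curated feature lists under /usr/share/zfs/compatibility.d/, including a grub2 profile, so a boot pool can be pinned to GRUB-readable features at creation time. A minimal sketch, assuming an OpenZFS >= 2.1 userland (pool and device names are placeholders):

```shell
# Create a boot pool restricted to the feature set the grub2
# compatibility profile allows; "bpool" and /dev/sdX2 are placeholders.
zpool create -o compatibility=grub2 bpool /dev/sdX2

# List the feature@ properties to see what actually got enabled:
zpool get all bpool | grep feature@
```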
Even this seems lacking. For example, encryption is not read-only
compatible (which seems obvious), and it isn't listed as compatible in
the source code you linked. However, grub-mount supposedly uses the
grub drivers and it has a command line option to provide an encryption
key. Maybe it is only compatible with the grub-mount command and not…
GRUB purposefully has lacking documentation, as its developers are not
friendly towards ZFS as a whole, and also because most people doing ZFS
nowadays do an EFI-stub setup with no GRUB, precisely to avoid these
issues.
On Mon, 23 Aug 2021 14:02:29 -0400, Rich Freeman wrote:
Aren't new features only applied to newly created pools and datasets?
If /boot was compatible with GRUB it should still be. At least that's
how I read the einfo messages zfs spits out.
That is correct. I want to know when I can enable the new features.
Of course, it would also be nice to know which features must be
disabled when creating a new boot pool.
It is just frustrating that as far as I can tell, there is no
documentation about what features grub supports. It wouldn't be hard
for them to just say "here are the zpool features known to work in
grub version foo." The best I've found is the Arch wiki, which has
the disclaimer that it is probably out of date and to check the man
pages, but of course the man pages don't actually say anything.
All I could find was this:
http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276
For a program with so much documentation, GRUB seems sorely lacking in
this respect. It makes me glad I decided to keep /boot off my zpools.
So back to my original question, I downloaded -- after a lot of
trouble finding it -- the Ubuntu 21.04 live server as they call it,
but I cannot find any documentation as to how to use it as a rescue
disk -- seems to be just an install disk. Am I missing something
here?
Am Mo., 23. Aug. 2021 um 10:15 Uhr schrieb John Covici <covici@ccs.covici.com>:
Hi. I have been using 5.4 lts kernels for a while, but it seems I
need to change to 5.10 lts -- even Debian is now using 5.10, so it
seems time to do this.
Now, the problem is that I am using zfs and will not give it up, and
the version I have been using 0.8.6 is no longer supported in 5.10
versions of the kernel. So, I need a newer version of zfs and a
rescue cd in case I get into trouble. Sysresc seems to no longer be
compatible with Gentoo Linux, so what is available? I could use Gentoo
Catalyst to make something -- I have done that in the past, but it's
quite a bit of work and I would prefer if there were something
available I could use out of the box.
Thanks in advance for any suggestions.
Hi,
LRS (Linux Recovery System) [1] may be an alternative. It is a
Gentoo-based live system with ZFS support and was (partly) inspired by
the move of SystemRescue to Arch.
Latest changelog states:
============ 20210820.17 ============
[X] Multiple updates and rebuilds applied
[X] Kernel updated to 5.10.60 + ZFS modules
It seems to be updated regularly and may be worth a try.
Regards,
Meik
[1] https://forums.gentoo.org/viewtopic-t-1106306.html
On Mon, 23 Aug 2021 16:57:11 -0400, John Covici wrote:
So back to my original question, I downloaded -- after a lot of
trouble finding it -- the Ubuntu 21.04 live server as they call it,
but I cannot find any documentation as to how to use it as a rescue
disk -- seems to be just an install disk. Am I missing something
here?
Use the desktop disc; I don't think the server one is a live environment
(it wouldn't make much sense for a server). Either use an xterm on the
desktop or find the invocation to boot to a console, possibly adding
"systemd.unit=multi-user.target" to the kernel options.
OK, I will check that out, thanks. It looks like that complete Gentoo
system mentioned in the forums is looking decent, except it's quite
large; I am checking that one also.
Hi. I have been using 5.4 lts kernels for a while, but it seems I
need to change to 5.10 lts -- even Debian is now using 5.10, so it
seems time to do this.
Now, the problem is that I am using zfs and will not give it up, and
the version I have been using 0.8.6 is no longer supported in 5.10
versions of the kernel. So, I need a newer version of zfs and a
rescue cd in case I get into trouble. Sysresc seems to no longer be
compatible with Gentoo Linux, so what is available? I could use Gentoo
Catalyst to make something -- I have done that in the past, but it's
quite a bit of work and I would prefer if there were something
available I could use out of the box.
Thanks in advance for any suggestions.
Well, I never use a boot pool, I boot with ext4 and just do the root
on zfs. But, I was more interested in some external media, so I am
looking at that Linux Recovery System to see if I can get that
working, or if there is a way to add zfs to the sysresc cd, I might do
that.
Hi John,
my approach is to have an EFI partition with a statically compiled GRUB,
an Alpine Linux rescue system, and a kernel with a tiny initramfs for a
ZFS root.
It works really well; the only thing you need to consider is that after
upgrading the root pool, you will not be able to boot into a previous BE
with the old ZFS. Without upgrading the pool, the transition is as easy
as recompiling a new kernel and upgrading the ZFS userspace tools.
For Alpine I use a script to put a new version on /boot:
https://github.com/robertek/root-scripts/blob/master/alpine_recovery_update
and a GRUB entry:
menuentry "Alpine linux recovery" {
    linux /boot/vmlinuz-lts modules=loop,squashfs,sd-mod,nvme quiet nomodeset
    initrd /boot/initramfs-lts
}
The Alpine extended version contains the ZFS modules, so you only need
to "apk add zfs" and then modprobe zfs.
The extended version is a little bit bigger, but I'm fine living with a
1 GB EFI partition.
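The recovery steps described above can be sketched roughly as follows; the package name matches Alpine's repos, but the pool name and mount point are placeholders, not tested commands:

```shell
# From an Alpine "extended" live environment, which ships the ZFS
# kernel modules; "rpool" is a placeholder pool name.
apk add zfs                      # install the userspace tools
modprobe zfs                     # load the kernel module
zpool import -f -R /mnt rpool    # import the root pool under /mnt for rescue work
```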
Robert.
On Monday, August 23, 2021 10:15:10 AM CEST John Covici wrote:
Hi. I have been using 5.4 lts kernels for a while, but it seems I
need to change to 5.10 lts -- even Debian is now using 5.10, so it
seems time to do this.
Now, the problem is that I am using zfs and will not give it up, and
the version I have been using 0.8.6 is no longer supported in 5.10
versions of the kernel. So, I need a newer version of zfs and a
rescue cd in case I get into trouble. Sysresc seems to no longer be
compatible with Gentoo Linux, so what is available? I could use Gentoo
Catalyst to make something -- I have done that in the past, but it's
quite a bit of work and I would prefer if there were something
available I could use out of the box.
Thanks in advance for any suggestions.
or if there is a way to add zfs to the sysresc cd, I might do
that.
You can also add additional modules to sysrescd, so it may be easier to
stick with that and add ZFS to it.
Looking at https://www.system-rescue.org/Modules/ it looks like you just
install them after booting the live USB then run cowpacman2srm to put
everything you installed onto a file that you add to the USB stick. See
the section titled "Creating SRM modules out of pacman packages".
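Roughly, per that page, the procedure looks like this; the ZFS package names here are assumptions, so check what SystemRescue's Arch repos actually call them:

```shell
# Inside the booted SystemRescue live USB:
pacman -Sy zfs-dkms zfs-utils    # install ZFS from the repos (names assumed)
cowpacman2srm zfs.srm            # pack the installed packages into an SRM module
# then copy zfs.srm onto the USB stick so it is loaded on every boot
```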
What I probably should get around to is grokking EFI+linux. I'm not
sure what the cleanest solution for that is these days - I've never
actually set up EFI on linux, mostly because I'm not sure what the
best practice is.
On Tue, 2021-08-31 at 10:04 -0400, Rich Freeman wrote:
What I probably should get around to is grokking EFI+linux. I'm not
sure what the cleanest solution for that is these days - I've never actually set up EFI on linux, mostly because I'm not sure what the
best practice is.
I can't speak to best practices, and I'm a long way from an expert on
EFI, but the instructions on the Gentoo handbook pages on disk
partitioning + grub setup worked as expected for me.
In a small nutshell, you have a small EFI+boot partition, set to type
'EFI System' and formatted FAT32, then tell grub to use it as an EFI directory when calling grub-install.
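Sketched out, assuming the ESP is /dev/sda1 and mounted at /efi (device and mount point are placeholders; the handbook may use /boot or /boot/efi instead):

```shell
mkfs.vfat -F 32 /dev/sda1                 # format the ESP as FAT32
mount /dev/sda1 /efi                      # mount it
grub-install --target=x86_64-efi \
    --efi-directory=/efi --bootloader-id=gentoo
grub-mkconfig -o /boot/grub/grub.cfg      # generate the menu
```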
In simple(r) systems where you only boot the same OS you can instead use the kernel's EFI stub to get the UEFI firmware to load the latest OS kernel directly from the ESP, without a 3rd party boot manager:
https://wiki.gentoo.org/wiki/EFI_stub
You'll use efibootmgr to manage the kernel images stored on the ESP, or
your UEFI configuration menu if it has this functionality.
https://wiki.gentoo.org/wiki/Efibootmgr
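A sketch of registering an EFI-stub kernel with efibootmgr; the disk, partition number, label, and loader path are all placeholders:

```shell
# Create a boot entry pointing at a kernel image stored on the ESP:
efibootmgr --create --disk /dev/sda --part 1 \
    --label "Gentoo EFI stub" --loader '\EFI\Gentoo\bzImage.efi'
# Running efibootmgr with no arguments lists entries and the boot order:
efibootmgr
```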
If you are multibooting frequently and getting into the UEFI boot menu
to change the boot order or running efibootmgr is too much hassle, then
a 3rd party boot manager will be useful. Your choice of GRUB, rEFInd,
systemd-boot, or syslinux EFI executable image will be installed and
loaded/run by the UEFI firmware from the ESP, with which in turn you
will select and load your desired OS.
On Tue, 31 Aug 2021 14:21:35 -0400, Rich Freeman wrote:
If you are multibooting frequently and getting into the UEFI boot
menu to change the boot order or running efibootmgr is too much
hassle, then a 3rd party boot manager will be useful. Your choice of
GRUB, rEFInd, systemd-boot, or syslinux EFI executable image will be
installed and loaded/run by the UEFI firmware from the ESP, with
which in turn you will select and load your desired OS.
So, which (if any) of these options supports either:
1. An EFI partition plus /boot on zfs (with no limitations on pool
config, ie it can be a root pool).
2. An EFI partition that contains everything.
If I want to use grub+EFI with a zfs root it sounds like I'd need TWO
boot partitions - an EFI partition (FAT32), and a /boot partition (anything, but if ZFS it needs to have controlled features). That
seems even more messy than what I'm doing now.
systemd-boot and refind both support everything on EFI. I am pretty sure
GRUB does too, but I have no reason to use GRUB with EFI. My setup on
this box is /boot on FAT32 and / (and everything else) on btrfs. I've
also used the same setup with ZFS.
Please beware, I have not used zfs to date, only btrfs, so the above
merely reflects my understanding rather than in-depth experience of the
difficulty in managing such a setup.
If you are multibooting frequently and getting into the UEFI boot
menu to change the boot order or running efibootmgr is too much
hassle, then a 3rd party boot manager will be useful. Your choice of
GRUB, rEFInd, systemd-boot, or syslinux EFI executable image will be
installed and loaded/run by the UEFI firmware from the ESP, with
which in turn you will select and load your desired OS.
So, which (if any) of these options supports either:
1. An EFI partition plus /boot on zfs (with no limitations on pool
config, ie it can be a root pool).
2. An EFI partition that contains everything.
If I want to use grub+EFI with a zfs root it sounds like I'd need TWO
boot partitions - an EFI partition (FAT32), and a /boot partition
(anything, but if ZFS it needs to have controlled features). That
seems even more messy than what I'm doing now.
Is there some particular need to resize boot? Is there even a need
to have a pool for it? It seems to me a standalone, simple VFAT
partition for /boot, which doubles up as the ESP with a /boot/EFI
subdirectory, will do the job, although I understand not all use cases
are as simplistic as what I envisage here.
On Tue, Aug 31, 2021 at 3:44 PM Michael <confabulate@kintzios.com> wrote:
Please beware, I have not used zfs to date, only btrfs, so the above
merely reflects my understanding rather than in-depth experience of the
difficulty in managing such a setup.
To save you digging through the thread, the issue with zfs is that it
adds new features over time, and grub isn't necessarily compatible
with all of them. You can control which features are enabled on-disk
for compatibility, but grub doesn't do a great job documenting which
features are/aren't supported in any particular version. So, it is a
bit of a guessing game.
It has been pointed out that there are various guides online, but:
1. They don't all say the exact same thing.
2. They aren't official upstream docs.
3. They rarely specify what version of grub they're talking about.
The typical solution is to either use very conservative settings for
your root partition (which isn't ideal from a zfs standpoint), or have
a separate /boot pool which means that you don't have to encumber the
rest of the system with whatever grub's limitations might be. Then
you just never update that partition and it shouldn't break. That
basically is no different than just having /boot on ext4 or vfat or
whatever. With this solution you also can't just freely resize /boot
the way you could if it were part of the pool.
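To make the guessing game at least explicit, the feature set can also be controlled by hand at pool creation; a sketch, where the two re-enabled features are purely illustrative rather than a vetted GRUB-safe list:

```shell
# -d disables every feature, then individual ones are re-enabled with
# -o feature@NAME=enabled; which ones are actually GRUB-safe is
# exactly the undocumented part being complained about above.
zpool create -d \
    -o feature@async_destroy=enabled \
    -o feature@lz4_compress=enabled \
    bpool /dev/sdX3
```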
systemd-boot and refind both support everything on EFI. I am pretty
sure GRUB does too, but I have no reason to use GRUB with EFI. My
setup on this box is /boot on FAT32 and / (and everything else) on
btrfs. I've also used the same setup with ZFS.
Any boot option on a UEFI MoBo requires an 'EFI System Partition'
(ESP), formatted as VFAT. The UEFI firmware boot loader will
list/load/run any *.efi software stored in the ESP compatible with the
UEFI API, whether this is a boot loader, a kernel with an EFI stub, or
some .efi diagnostic application.
As long as your boot loader of choice, or kernel image and any initrd contains the requisite fs drivers, there will be no problem mounting
and accessing whatever root fs needs to be accessed.
GRUB contains a number of ZFS modules to do this job (zfscrypt.mod, zfsinfo.mod, zfs.mod) - not sure about the other boot managers.
Typical GRUB installations have /boot/efi mounted on the ESP, with the grubx64.efi image on it, while the rest of the files, vmlinuz symlinks,
etc. are on the root partition.
Please beware, I have not used zfs to date, only btrfs, so the above
merely reflects my understanding rather than in depth experience of the difficulty in managing such a setup.