Since I got such a quick response to my last question, I figured I'd ask another one that's been bugging me.
1) Why do ZFS pools use <pool name>/ROOT/solaris as their install
point? Why not <pool name>/solaris?
I get wanting something to make it easy to separate other datasets in
the pool.
2) Why capital ROOT instead of lowercase root?
Is it possibly caps to make it more annoying to type and thus less
likely to be accidentally typed?
3) Do the same answers apply to <pool name>/BOOT/solaris?
4) Is there any historical significance to "rpool" / "bpool" / "tank"
(case insensitive)?
Thank you for helping this n00b learn another thing.
--
Grant. . . .
unix || die
ROOT is just a mount-point structure. The lowercase root is the default
home directory for the root (superuser) user.
I agree using caps is not very UNIX like.
rpool is for root pool, bpool looks like boot pool. The use of tank for
a data pool name comes from the Matrix movie series. The ZFS eng team
was keen on this movie series when ZFS was developed.
IMO it's about as effective as using a leading underscore to mark a
private interface or variable.
Jeff Bonwick explains the Matrix references in the docs in his Birth of
ZFS talk: <URL:https://openzfs.org/wiki/OpenZFS_Developer_Summit_2015>
Hi Grant,
An important point is that the rpool, boot, and OS components should be
left alone, and any non-rpool/boot/OS data should go into a data pool
and not the rpool.
I wouldn't spend too much time on the rpool components and how it's structured.
By keeping rpool and non-rpool data separate, you can apply file system
properties that better match your data, and rpool stays small, mostly
static, and easier to recover.
Adding non-rpool data to an rpool is not supported.
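For example, a rough sketch with purely illustrative pool, dataset, and
device names:

    # separate data pool on its own disks
    zpool create dpool mirror c2t0d0 c2t1d0
    # properties tuned to the data, without touching rpool
    zfs create -o compression=on -o atime=off dpool/export

The rpool itself is left alone and stays small.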
A few comments below.
Solaris 11.4 ZFS Admin Guide is a good reference:
https://docs.oracle.com/cd/E37838_01/html/E61017/index.html
Thanks, Cindy
ROOT is just a mount-point structure. The lowercase root is the
default home directory for the root (superuser) user.
See above. Create a separate pool for your data pool.
Again, just a mount point structure and also Solaris 11 has a lowercase
root directory.
I agree using caps is not very UNIX like.
I see a lowercase boot/solaris directory.
rpool is for root pool, bpool looks like boot pool. The use of tank
for a data pool name comes from the Matrix movie series. The ZFS eng
team was keen on this movie series when ZFS was developed.
I.e. I have a system (glorified bastion host) that is a 1U box with 4 × 3½" drives that I'm rebuilding. The hardware RAID-6 lost 3 drives
during the current WFH climate. I've rebuilt it with new drives as a temporary measure. But, I'd like to rebuild it again in the near future
as a 4 way ZFS /mirror/. (I really don't want to have this problem
again.) I don't need space. I need stability and longevity. I'd like
the pool to outlast the OS that's installed on the machine.
IMO you don't want to use zpools on a host that has a hardware
RAID-anything host bus adapter. My experience is with HPE hardware.
If you want to use ZFS use it with a simple HBA that will present
the raw HDD to you without any interference, translation, or buffering.
On 5/13/20 8:10 AM, cindy.swearingen@gmail.com wrote:
Hi Grant,
Hi Cindy,
An important point is that the rpool, boot, and OS components should be
left alone, and any non-rpool/boot/OS data should go into a data pool
and not the rpool.
I hear you. I think I understand what you're saying, and why you say
it. (I've said quite similar things about volume groups for AIX and
Linux for 15+ years.)
But, I think ZFS is starting to make it into a space where doing that is
not an option.
I.e. I have a system (glorified bastion host) that is a 1U box with 4 × 3½" drives that I'm rebuilding. The hardware RAID-6 lost 3 drives
during the current WFH climate. I've rebuilt it with new drives as a temporary measure. But, I'd like to rebuild it again in the near future
as a 4 way ZFS /mirror/. (I really don't want to have this problem
again.) I don't need space. I need stability and longevity. I'd like
the pool to outlast the OS that's installed on the machine.
I'd really like to build a pair of them and have each machine do a zfs
send to each other's pools. That way, even if I lose four hard drives,
I would still have the files on the other machine.
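Roughly what I have in mind is plain snapshot replication (the host,
dataset, and snapshot names below are just placeholders):

    # on host A: snapshot the data and send it to host B's pool
    zfs snapshot -r rpool/data@2020-05-14
    zfs send -R rpool/data@2020-05-14 | ssh hostB zfs receive -F tank/hostA
    # later runs only ship the differences
    zfs snapshot -r rpool/data@2020-05-15
    zfs send -R -i rpool/data@2020-05-14 rpool/data@2020-05-15 | \
        ssh hostB zfs receive tank/hostA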
Aside: The drives are multi TB for an OS install that's (considerably)
less than 10 GB.
I could partition the drives and have bpool and an rpool on them. (I
had originally thought about a 4 way mirror for bpool and RAIDZ2 for
rpool. But decided that everything could easily fit in the 4 way
mirror.) The reason for having a separate bpool was that GRUB doesn't
support booting from any form of Z-RAID more complex than mirroring.
So, I guess that I could still partition the drives and have rpool and
dpool on the same drives. But that seems a little silly to me for my
particular use case.
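For the 4-way mirror itself, my understanding is that it's just repeated
attaches to the same vdev (device names below are made up):

    # start from the mirror the installer creates, then attach the rest
    zpool attach rpool c1t0d0 c1t2d0
    zpool attach rpool c1t0d0 c1t3d0
    zpool status rpool   # should show one mirror vdev with four disks
    # I assume the boot loader still has to go on the new disks,
    # e.g. bootadm install-bootloader on Solaris 11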
So, I'm in a situation that really lends itself to having OS specific
files and DATA (non-OS specific) files in the same pool. The hardware doesn't lend itself to attaching more drives. Nor do I think I would do
so if it did. I simply don't see the need. Not when each system's
rpool can hold the necessary data from the other system.
Please correct me if I'm wrong about any of this.
I wouldn't spend too much time on the rpool components and how it's structured.
I have a nasty habit of asking that pesky "why" question, particularly
in the context of why something was done the way that it was.
By keeping rpool and non-rpool data separate, you can apply file system
properties that better match your data, and rpool stays small, mostly
static, and easier to recover.
Having rpool and dpool isn't really an option for this use case.
Adding non-rpool data to an rpool is not supported.
I'm taking that to mean that Oracle (et al.) won't support me, as in hold
my hand, if I have non-OS specific files in the rpool.
Something that's *SIGNIFICANTLY* different from it not working.
All the crazy things that you could do with old Cisco gear vs. what TAC would(n't) "support" you doing comes to mind.
A few comments below.
Solaris 11.4 ZFS Admin Guide is a good reference:
https://docs.oracle.com/cd/E37838_01/html/E61017/index.html
ACK
Thanks, Cindy
Thank you Cindy.
ROOT is just a mount-point structure. The lowercase root is the
default home directory for the root (superuser) user.
I get the difference between "ROOT" as the base for the system and
"root" as the root user's home directory. To me, those are two
different things.
That doesn't give any indication why "ROOT" as the base for the system
is capital.
Or are you saying that the capital was done to differentiate the two?
See above. Create a separate pool for your data pool.
See above. That's not always a viable option.
Again, just a mount point structure and also Solaris 11 has a lowercase root directory.
Understood.
I'm trying to understand the history and motivation behind it being
named "ROOT", and specifically why it's capitals.
I agree using caps is not very UNIX like.
There are plenty of commands that have options that use capital letters.
I believe there are even some commands that have capitals in their
name. (X11 related things come to mind.)
I see a lowercase boot/solaris directory.
Okay.
I wonder if the case difference is version related.
rpool is for root pool, bpool looks like boot pool. The use of tank
for a data pool name comes from the Matrix movie series. The ZFS eng
team was keen on this movie series when ZFS was developed.
ACK
Thank you for your reply Cindy.
--
Grant. . . .
unix || die
On 5/14/20 6:24 AM, Scott wrote:
IMO you don't want to use zpools on a host that has a hardware
RAID-anything host bus adapter. My experience is with HPE hardware.
I agree that in an ideal world it is best to have dumb HBAs between ZFS
and the disks. But that's not always an option.
Nor does hardware RAID give me nearly all the options that ZFS does.
If you want to use ZFS use it with a simple HBA that will present the
raw HDD to you without any interference, translation, or buffering.
That is not an option in all cases. Nor do I think that sub-optimal hardware options should exclude me from using ZFS.
Hi Grant,
See my comments below. I agree with Scott's comments not to use any
kind of hardware RAID with ZFS root pool. It's a royal pain to replace
a failed device, and it undoes a lot of the goodness that ZFS root pool
brings to device management.
I think you are saying that a 4-way mirrored root pool is safer than
a 2-way rpool and 2-way data pool.
Just to be clear, the root pool only supports a 4-way mirror, not a
2x2 mirror.
If you are going to create a file system in rpool then do not use
existing ROOT or VARSHARE components. Create something like rpool/data
and then run tests like create a BE, update it and roll back.
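Something along these lines exercises that path without touching ROOT or
VARSHARE (the BE and dataset names are only examples):

    zfs create rpool/data
    beadm create test-be           # clone the current boot environment
    beadm activate test-be
    init 6                         # reboot into it, update, run your tests
    beadm activate <previous-be>   # roll back by re-activating the old BE
    init 6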
MHO is that disks are cheap and if continued operation is your number
one priority, follow the supported path because this is untested.
Also review the best practices section in the ZFS Admin Guide.
Yes, that is what I remember but memories from 2008, when ZFS boot
was developed, are dim.
Many RAID adapters can be configured as having each disk as a RAID0
volume, which you can then use as disks for a ZFS setup.
I know the LSI MegaRAID controller can be set up that way; I've done it
to a few RX500's and their relatives over the years. (Not all to run
Solaris or ZFS, but to get them to do what we wanted.)
On 5/14/20 8:51 PM, Gary R. Schmidt wrote:
Many RAID adapters can be configured as having each disk as a RAID0
volume, which you can then use as disks for a ZFS setup.
Yep.
I know the LSI MegaRAID controller can be set up that way; I've done it
to a few RX500's and their relatives over the years. (Not all to run
Solaris or ZFS, but to get them to do what we wanted.)
I'm using Dell PowerEdge RAID Controllers because that's what's in the re-purposed systems and I can't (for many reasons) change the hardware.
Which Dell controller model are you stuck with? Most do have the option
of having each disk as a RAID0 volume or simply as a raw drive.
On 5/18/20 10:51 PM, Ian Collins wrote:
Which Dell controller model are you stuck with? Most do have the option
of having each disk as a RAID0 volume or simply as a raw drive.
It looks like a PERC H710P.
Yes, I can create a bunch of independent single disk RAID 0 volumes. I
don't think I've been able to get pass-through working in a while.
I'll take another swing at that when I'm doing the physical work.
At the moment, I'm trying to learn some history about ZFS and why things
were done the way that they were.
On 5/14/20 8:51 PM, Gary R. Schmidt wrote:
Many RAID adapters can be configured as having each disk as a RAID0
volume, which you can then use as disks for a ZFS setup.
Yep.
I know the LSI MegaRAID controller can be set up that way; I've done it
to a few RX500's and their relatives over the years. (Not all to run
Solaris or ZFS, but to get them to do what we wanted.)
I'm using Dell PowerEdge RAID Controllers because that's what's in the re-purposed systems and I can't (for many reasons) change the hardware.
At least with some LSI cards, you can change them from Integrated RAID (a.k.a. IR) mode to Initiator / Target (a.k.a. IT) mode. That's my preference if I can do so.
Sadly the systems that I'm working with at the moment don't support that.
Since I got such a quick response to my last question, I figured I'd ask another one that's been bugging me.
1) Why do ZFS pools use <pool name>/ROOT/solaris as their install
point? Why not <pool name>/solaris?
I get wanting something to make it easy to separate other datasets in
the pool.
2) Why capital ROOT instead of lowercase root?
Is it possibly caps to make it more annoying to type and thus less
likely to be accidentally typed?
3) Do the same answers apply to <pool name>/BOOT/solaris?
4) Is there any historical significance to "rpool" / "bpool" / "tank"
(case insensitive)?
Most of the PowerEdge RAID adaptors that have crossed my path can be
flashed between IR and IT modes.
It's not a job for the faint-hearted (IIRC it requires a Windows box),
but usually it can be done.
https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/Enable-passthrough-on-H710P/td-p/4365545
A worthy quest :)
It can be done from FreeDOS and probably Unbreakable Linux.
But I would just tell Dell to swap in the HBA part for the PERC.
Under ROOT, all the other boot environments exist.
So instead of having:
rpool/11.3-sru3
rpool/11.3-sru5
rpool/11.4-FCS
we gather them under ROOT:
rpool/ROOT/11.3-sru3
rpool/ROOT/11.3-sru5
rpool/ROOT/11.4-FCS
So we use only one reserved name (well, two: ROOT and VARSHARE) and
under ROOT we have a name space of its own for boot environments.
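So on a live system you would see something like this (names are
illustrative, size columns omitted):

    $ zfs list -r -o name,mountpoint rpool/ROOT
    NAME                   MOUNTPOINT
    rpool/ROOT             legacy
    rpool/ROOT/11.3-sru3   /
    rpool/ROOT/11.3-sru5   /
    rpool/ROOT/11.4-FCS    /

and beadm list shows the same set of boot environments.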
Because most customers use lowercase file systems, we use ROOT and
VARSHARE so they are much less likely to conflict with customer names.
No. In ZFS you are required to give names to file systems; in UFS you'd
have "/" and it has no name.
What type of system is that?
rpool == root pool
bpool == boot pool (I think this is specific to certain larger systems
which have no internal storage, if I remember correctly)
"tank" is what is generally used in examples but in reality it is used a
lot; it is NOT reserved and has no specific meaning for Solaris.
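I.e. nothing stops you from picking any name you like (the devices here
are made up):

    # the name you will see in most docs and examples
    zpool create tank mirror c3t0d0 c3t1d0
    # any other name is just as valid
    zpool create backup mirror c3t2d0 c3t3d0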