• Questions about ZFS pool dataset naming conventions.

    From Grant Taylor@21:1/5 to All on Tue May 12 18:10:47 2020
    Since I got such a quick response to my last question, I figured I'd ask another one that's been bugging me.

    1) Why do ZFS pools use <pool name>/ROOT/solaris as their install
    point? Why not <pool name>/solaris?

    I get wanting something to make it easy to separate other datasets in
    the pool.

    2) Why capital ROOT instead of lowercase root?

    Is it possibly caps to make it more annoying to type and thus less
    likely to be accidentally typed?

    3) Do the same answers apply to <pool name>/BOOT/solaris?

    4) Is there any historical significance to "rpool" / "bpool" / "tank"
    (case insensitive)?

    Thank you for helping this n00b learn another thing.



    --
    Grant. . . .
    unix || die

  • From cindy.swearingen@gmail.com@21:1/5 to Grant Taylor on Wed May 13 07:10:48 2020
    Hi Grant,

    An important point is that the rpool, boot, and OS components should be
    left alone, and any non-rpool (non-boot, non-OS) data should go into a
    data pool, not the rpool. I wouldn't spend too much time on the rpool
    components and how it's structured. By keeping rpool and non-rpool data
    separate, you can apply file system properties to more easily match your
    data, and the rpool stays small, mostly static, and easier to recover.
    Adding non-rpool data to an rpool is not supported.
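
    (A minimal sketch of that separation, assuming a hypothetical data pool
    named "dpool" and placeholder device names; the properties shown are just
    examples of per-dataset tuning:)

    # zpool create dpool mirror c2t0d0 c2t1d0
    # zfs create dpool/data
    # zfs set compression=on dpool/data
    # zfs set atime=off dpool/data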

    A few comments below.

    Solaris 11.4 ZFS Admin Guide is a good reference:

    https://docs.oracle.com/cd/E37838_01/html/E61017/index.html

    Thanks, Cindy



    On Tuesday, May 12, 2020 at 6:10:47 PM UTC-6, Grant Taylor wrote:
    Since I got such a quick response to my last question, I figured I'd ask another one that's been bugging me.

    1) Why do ZFS pools use <pool name>/ROOT/solaris as their install
    point? Why not <pool name>/solaris?
    ROOT is just a mount-point structure. The lowercase root is the default home directory for the root (superuser) user.

    I get wanting something to make it easy to separate other datasets in
    the pool.

    See above. Create a separate pool for your data pool.

    2) Why capital ROOT instead of lowercase root?

    Again, just a mount point structure and also Solaris 11 has a lowercase root directory.

    Is it possibly caps to make it more annoying to type and thus less
    likely to be accidentally typed?

    I agree using caps is not very UNIX like.

    3) Do the same answers apply to <pool name>/BOOT/solaris?

    I see a lowercase boot/solaris directory.



    4) Is there any historical significance to "rpool" / "bpool" / "tank"
    (case insensitive)?

    rpool is for root pool, bpool looks like boot pool. The use of tank for a data pool name comes from the Matrix movie series. The ZFS eng team was keen on this movie series when ZFS was developed.

    Thank you for helping this n00b learn another thing.



    --
    Grant. . . .
    unix || die

  • From John D Groenveld@21:1/5 to cindy.swearingen@gmail.com on Wed May 13 19:56:53 2020
    In article <54d26a19-c02f-4312-96e3-abd26a777b5b@googlegroups.com>,
    <cindy.swearingen@gmail.com> wrote:
    ROOT is just a mount-point structure. The lowercase root is the default
    home directory for the root (superuser) user.

    # beadm create foo
    # zfs list -r rpool
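
    (For illustration only; the names and numbers below are placeholders.
    On Solaris 11 the listing looks roughly like this, with the new BE "foo"
    appearing as a clone under rpool/ROOT:)

    NAME                     USED  AVAIL  REFER  MOUNTPOINT
    rpool                    9.8G  40.2G    74K  /rpool
    rpool/ROOT               7.9G  40.2G    31K  legacy
    rpool/ROOT/foo            95K  40.2G   3.9G  /
    rpool/ROOT/solaris       7.9G  40.2G   3.9G  /
    rpool/ROOT/solaris/var   1.0G  40.2G   800M  /var
    rpool/VARSHARE           2.5M  40.2G   2.4M  /var/share
    ...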

    I agree using caps is not very UNIX like.

    IMO as effective as using a leading underscore to mark a private
    interface or variable.

    rpool is for root pool, bpool looks like boot pool. The use of tank for
    a data pool name comes from the Matrix movie series. The ZFS eng team
    was keen on this movie series when ZFS was developed.

    Jeff Bonwick explains the Matrix references in the docs in the
    Birth of ZFS talk: <URL:https://openzfs.org/wiki/OpenZFS_Developer_Summit_2015>

    John
    groenveld@acm.org

  • From Grant Taylor@21:1/5 to John D Groenveld on Wed May 13 23:10:28 2020
    On 5/13/20 1:56 PM, John D Groenveld wrote:
    IMO as effective as using leading underscore to mark a private
    interface or variable.

    Agreed.

    About the only thing it will likely do is keep someone from typing it accidentally. They will have to want to type it.

    Jeff Bonwick explains the Matrix references in the docs in the Birth of
    ZFS talk: <URL:https://openzfs.org/wiki/OpenZFS_Developer_Summit_2015>

    Thank you for the link. I watched Jeff's talk and a few others. I'll
    watch more as time permits.

    Thank you for your reply John.



    --
    Grant. . . .
    unix || die

  • From Grant Taylor@21:1/5 to cindy.swearingen@gmail.com on Wed May 13 23:08:21 2020
    On 5/13/20 8:10 AM, cindy.swearingen@gmail.com wrote:
    Hi Grant,

    Hi Cindy,

    An important point is that the rpool, boot, OS components should be
    left alone and any non-rpool, boot, OS data should go into a data
    pool and not the rpool.

    I hear you. I think I understand what you're saying, and why you say
    it. (I've said quite similar things about volume groups for AIX and
    Linux for 15+ years.)

    But, I think ZFS is starting to make it into a space where doing that is
    not an option.

    I.e. I have a system (glorified bastion host) that is a 1U box with 4 ×
    3½" drives that I'm rebuilding. The hardware RAID-6 lost 3 drives
    during the current WFH climate. I've rebuilt it with new drives as a
    temporary measure. But, I'd like to rebuild it again in the near future
    as a 4 way ZFS /mirror/. (I really don't want to have this problem
    again.) I don't need space. I need stability and longevity. I'd like
    the pool to outlast the OS that's installed on the machine.

    I'd really like to build a pair of them and have each machine do a zfs
    send to each other's pools. That way, even if I lose four hard drives,
    I would still have the files on the other machine.
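
    (A rough sketch of that cross-replication, with hypothetical host and
    dataset names; it assumes SSH access to the peer and a dataset on the
    receiving side to land the streams in:)

    otherhost# zfs create rpool/backups
    # zfs snapshot -r rpool/data@backup1
    # zfs send -R rpool/data@backup1 | ssh otherhost zfs receive -d rpool/backups

    Subsequent runs would use an incremental send (zfs send -R -i) against
    the previous snapshot.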

    Aside: The drives are multi TB for an OS install that's (considerably)
    less than 10 GB.

    I could partition the drives and have bpool and an rpool on them. (I
    had originally thought about a 4 way mirror for bpool and RAIDZ2 for
    rpool. But decided that everything could easily fit in the 4 way
    mirror.) The reason for having rpool separate was that GRUB doesn't
    support booting from any form of Z-RAID more complex than mirroring.

    So, I guess that I could still partition the drives and have an rpool
    and a dpool on the same drives. But that seems a little silly to me for
    my particular use case.

    So, I'm in a situation that really lends itself to having OS specific
    files and DATA (non-OS specific) files in the same pool. The hardware
    doesn't lend itself to attaching more drives. Nor do I think I would do
    so if it did. I simply don't see the need. Not when each system's
    rpool can hold the necessary data from the other system.

    Please correct me if I'm wrong about any of this.

    I wouldn't spend too much time on the rpool components and how it's structured.

    I have a nasty habit of asking that pesky "why" question, particularly
    in the context of why something was done the way that it was.

    By keeping rpool and non-rpool data separate, you can apply file system properties to more easily match your data and rpool stays small and
    mostly static and also easier to recover.

    Having rpool and dpool isn't really an option for this use case.

    Adding non-rpool data to an rpool is not supported.

    I'm taking that to mean that Oracle (et al.) won't support me, as in
    hold my hand, if I have non-OS specific files in the rpool.

    Something that's *SIGNIFICANTLY* different from it not working.

    All the crazy things that you could do with old Cisco gear vs. what TAC would(n't) "support" you doing come to mind.

    A few comments below.

    Solaris 11.4 ZFS Admin Guide is a good reference:

    https://docs.oracle.com/cd/E37838_01/html/E61017/index.html

    ACK

    Thanks, Cindy

    Thank you Cindy.

    ROOT is just a mount-point structure. The lowercase root is the
    default home directory for the root (superuser) user.

    I get the difference between "ROOT" as the base for the system and
    "root" as the root user's home directory. To me, those are two
    different things.

    That doesn't give any indication why "ROOT" as the base for the system
    is capital.

    Or are you saying that the capital was done to differentiate the two?

    See above. Create a separate pool for your data pool.

    See above. That's not always a viable option.

    Again, just a mount point structure and also Solaris 11 has a lowercase
    root directory.

    Understood.

    I'm trying to understand the history and motivation behind it being
    named "ROOT", and specifically why it's capitals.

    I agree using caps is not very UNIX like.

    There are plenty of commands that have options that use capital letters.
    I believe there are even some commands that have capitals in their
    name. (X11 related things come to mind.)

    I see a lowercase boot/solaris directory.

    Okay.

    I wonder if the case difference is version related.

    rpool is for root pool, bpool looks like boot pool. The use of tank
    for a data pool name comes from the Matrix movie series. The ZFS eng
    team was keen on this movie series when ZFS was developed.

    ACK

    Thank you for your reply Cindy.



    --
    Grant. . . .
    unix || die

  • From Scott@21:1/5 to Grant Taylor on Thu May 14 05:24:15 2020
    On Wednesday, May 13, 2020 at 10:08:21 PM UTC-7, Grant Taylor wrote:
    I.e. I have a system (glorified bastion host) that is a 1U box with 4 × 3½" drives that I'm rebuilding. The hardware RAID-6 lost 3 drives
    during the current WFH climate. I've rebuilt it with new drives as a temporary measure. But, I'd like to rebuild it again in the near future
    as a 4 way ZFS /mirror/. (I really don't want to have this problem
    again.) I don't need space. I need stability and longevity. I'd like
    the pool to outlast the OS that's installed on the machine.

    IMO you don't want to use zpools on a host that has a hardware RAID-anything host bus adapter. My experience is with HPE hardware.

    If you want to use ZFS use it with a simple HBA that will present the raw HDD to you without any interference, translation, or buffering.

    Regards, Scott

  • From Grant Taylor@21:1/5 to Scott on Thu May 14 09:28:41 2020
    On 5/14/20 6:24 AM, Scott wrote:
    IMO you don't want to use zpools on a host that has a hardware
    RAID-anything host bus adapter. My experience is with HPE hardware.

    I agree that in an ideal world it is best to have dumb HBAs between ZFS
    and the disks. But that's not always an option.

    Nor does hardware RAID give me nearly all the options that ZFS does.

    If you want to use ZFS use it with a simple HBA that will present
    the raw HDD to you without any interference, translation, or buffering.

    That is not an option in all cases. Nor do I think that sub-optimal
    hardware options should exclude me from using ZFS.



    --
    Grant. . . .
    unix || die

  • From cindy.swearingen@gmail.com@21:1/5 to Grant Taylor on Thu May 14 11:14:34 2020
    Hi Grant,

    See my comments below. I agree with Scott's comments not to use any kind
    of hardware RAID with a ZFS root pool. It's a royal pain to replace a
    failed device and it undoes a lot of the goodness that a ZFS root pool
    brings to device management.

    Thanks, Cindy

    On Wednesday, May 13, 2020 at 11:08:21 PM UTC-6, Grant Taylor wrote:
    On 5/13/20 8:10 AM, cindy.swearingen@gmail.com wrote:
    Hi Grant,

    Hi Cindy,

    An important point is that the rpool, boot, OS components should be
    left alone and any non-rpool, boot, OS data should go into a data
    pool and not the rpool.

    I hear you. I think I understand what you're saying, and why you say
    it. (I've said quite similar things about volume groups for AIX and
    Linux for 15+ years.)

    But, I think ZFS is starting to make it into a space where doing that is
    not an option.

    I.e. I have a system (glorified bastion host) that is a 1U box with 4 × 3½" drives that I'm rebuilding. The hardware RAID-6 lost 3 drives
    during the current WFH climate. I've rebuilt it with new drives as a temporary measure. But, I'd like to rebuild it again in the near future
    as a 4 way ZFS /mirror/. (I really don't want to have this problem
    again.) I don't need space. I need stability and longevity. I'd like
    the pool to outlast the OS that's installed on the machine.

    I'd really like to build a pair of them and have each machine do a zfs
    send to each other's pools. That way, even if I lose four hard drives,
    I would still have the files on the other machine.

    Aside: The drives are multi TB for an OS install that's (considerably)
    less than 10 GB.

    I could partition the drives and have bpool and an rpool on them. (I
    had originally thought about a 4 way mirror for bpool and RAIDZ2 for
    rpool. But decided that everything could easily fit in the 4 way
    mirror.) The reason for having rpool separate was that GRUB doesn't
    support booting from any form of Z-RAID more complex than mirroring.

    So, I guess that I could still partition the drives and have an rpool
    and a dpool on the same drives. But that seems a little silly to me for
    my particular use case.

    So, I'm in a situation that really lends itself to having OS specific
    files and DATA (non-OS specific) files in the same pool. The hardware doesn't lend itself to attaching more drives. Nor do I think I would do
    so if it did. I simply don't see the need. Not when each system's
    rpool can hold the necessary data from the other system.

    Please correct me if I'm wrong about any of this.

    You are not incorrect and, if you are like me, you will learn by
    experience.

    I think you are saying that a 4-way mirrored root pool is safer than a
    2-way rpool and 2-way data pool. Just to be clear, the root pool only
    supports a 4-way mirror (a single mirrored vdev), not a 2x2 mirror.

    If you are going to create a file system in rpool, then do not use the
    existing ROOT or VARSHARE components. Create something like rpool/data
    and then run tests: create a BE, update it, and roll back.
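
    (A minimal sketch of that kind of test, assuming the default Solaris 11
    BE name "solaris"; the dataset and BE names are placeholders:)

    # zfs create rpool/data
    # beadm create testbe
    # beadm activate testbe
    # init 6
    (exercise the new BE, e.g. a pkg update, then roll back)
    # beadm activate solaris
    # init 6
    # beadm destroy testbe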

    MHO is that disks are cheap and if continued operation is your number one priority, follow the supported path because this is untested.

    Also review the best practices section in the ZFS Admin Guide.

    I wouldn't spend too much time on the rpool components and how it's structured.

    I have a nasty habit of asking that pesky "why" question, particularly
    in the context of why something was done the way that it was.

    By keeping rpool and non-rpool data separate, you can apply file system properties to more easily match your data and rpool stays small and
    mostly static and also easier to recover.

    Having rpool and dpool isn't really an option for this use case.

    Adding non-rpool data to an rpool is not supported.

    I'm taking that to mean that Oracle (et al.) won't support me as in hold
    my hand if I have non-OS specific files in the rpool.

    Something that's *SIGNIFICANTLY* different than it won't work.

    All the crazy things that you could do with old Cisco gear vs. what TAC would(n't) "support" you doing come to mind.

    A few comments below.

    Solaris 11.4 ZFS Admin Guide is a good reference:

    https://docs.oracle.com/cd/E37838_01/html/E61017/index.html

    ACK

    Thanks, Cindy

    Thank you Cindy.

    ROOT is just a mount-point structure. The lowercase root is the
    default home directory for the root (superuser) user.

    I get the difference between "ROOT" as the base for the system and
    "root" as the root user's home directory. To me, those are two
    different things.

    That doesn't give any indication why "ROOT" as the base for the system
    is capital.

    Or are you saying that the capital was done to differentiate the two?

    Yes, that is what I remember but memories from 2008, when ZFS boot was developed, are dim.

    See above. Create a separate pool for your data pool.

    See above. That's not always a viable option.

    Again, just a mount point structure and also Solaris 11 has a lowercase root directory.

    Understood.

    I'm trying to understand the history and motivation behind it being
    named "ROOT", and specifically why it's capitals.

    I agree using caps is not very UNIX like.

    There are plenty of commands that have options that use capital letters.
    I believe there are even some commands that have capitals in their
    name. (X11 related things come to mind.)

    I see a lowercase boot/solaris directory.

    Okay.

    I wonder if the case difference is version related.

    rpool is for root pool, bpool looks like boot pool. The use of tank
    for a data pool name comes from the Matrix movie series. The ZFS eng
    team was keen on this movie series when ZFS was developed.

    ACK

    Thank you for your reply Cindy.



    --
    Grant. . . .
    unix || die

  • From Gary R. Schmidt@21:1/5 to Grant Taylor on Fri May 15 12:51:55 2020
    On 15/05/2020 01:28, Grant Taylor wrote:
    On 5/14/20 6:24 AM, Scott wrote:
    IMO you don't want to use zpools on a host that has a hardware
    RAID-anything host bus adapter.  My experience is with HPE hardware.

    I agree in an ideal world, that it is best to have dumb HBAs between ZFS
    and the disks.  But that's not always an option.

    Nor does hardware RAID give me nearly all the options that ZFS does.

    If you want to use ZFS use it with a simple HBA that will present the
    raw HDD to you without any interference, translation, or buffering.

    That is not an option in all cases.  Nor do I think that sub-optimal hardware options should exclude me from using ZFS.

    Many RAID adapters can be configured to present each disk as a
    single-disk RAID0 volume, which you can then use as disks for a ZFS setup.

    I know the LSI MegaRAID controller can be set up that way; I've done it
    to a few RX500s and their relatives over the years. (Not all to run
    Solaris or ZFS, but to get them to do what we wanted.)

    Cheers,
    Gary B-)

    --
    Waiting for a new signature to suggest itself...

  • From Grant Taylor@21:1/5 to cindy.swearingen@gmail.com on Mon May 18 22:47:54 2020
    On 5/14/20 12:14 PM, cindy.swearingen@gmail.com wrote:
    Hi Grant,

    Hi Cindy,

    See my comments below. I agree with Scott's comments not to use any
    kind of hardware RAID with a ZFS root pool. It's a royal pain to replace
    a failed device and it undoes a lot of the goodness that a ZFS root pool
    brings to device management.

    Understood. (See my reply to Gary for more details.)

    I think you are saying that a 4-way mirrored root pool is safer than
    a 2-way rpool and 2-way data pool.

    Yes. In many different ways.

    Just be clear, the root pool only supports a 4-way mirror and not a
    2x2 mirror.

    Yep, the zpool would be the typical:

    <pool>
      <mirror>
        <disk 1>
        <disk 2>
        <disk 3>
        <disk 4>
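
    (For illustration only, with placeholder device names; in practice the
    installer creates the root pool on the first disk, and the remaining
    disks are then attached to grow it into a 4-way mirror. Whether you use
    whole disks or slices depends on the release and boot setup.)

    # zpool attach rpool c0t0d0s0 c0t1d0s0
    # zpool attach rpool c0t0d0s0 c0t2d0s0
    # zpool attach rpool c0t0d0s0 c0t3d0s0
    # zpool status rpool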

    If you are going to create a file system in rpool then do not use
    existing ROOT or VARSHARE components. Create something like rpool/data
    and then run tests like create a BE, update it and roll back.

    I was planning on staying out of <pool>/ROOT and any other system
    datasets. I'd create a new one, likely based on the name of the other
    system, possibly under "backup" or something like that.

    <pool>/backups/<other system name>

    I /might/ put this under data.

    <pool>/data/backups/<other system name>
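
    (A rough sketch, with "peerhost" standing in for the other system's
    name; the -p flag creates the intermediate datasets:)

    # zfs create -p rpool/data/backups/peerhost
    # zfs list -r rpool/data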

    MHO is that disks are cheap and if continued operation is your number
    one priority, follow the supported path because this is untested.

    The supported path, per se, isn't really an option for this. (See my
    reply to Gary.) I'm making the best of what I have while trying to
    follow the Solaris & ZFS spirit.

    Also review the best practices section in the ZFS Admin Guide.

    Understood. Will do.

    Yes, that is what I remember but memories from 2008, when ZFS boot
    was developed, are dim.

    ACK



    --
    Grant. . . .
    unix || die

  • From Grant Taylor@21:1/5 to Gary R. Schmidt on Mon May 18 22:41:56 2020
    On 5/14/20 8:51 PM, Gary R. Schmidt wrote:
    Many RAID adapters can be configured as having each disk as a RAID0
    volume, which you can then use as disks for a ZFS setup.

    Yep.

    I know the LSI MegaRAID controller can be set up that way, I've done to
    a few RX500's and their relatives over the years.  (Not all to run
    Solaris or ZFS, but to get them to do what we wanted.)

    I'm using Dell PowerEdge RAID Controllers because that's what's in the re-purposed systems and I can't (for many reasons) change the hardware.

    At least with some LSI cards, you can change them from Integrated RAID
    (a.k.a. IR) mode to Initiator / Target (a.k.a. IT) mode. That's my
    preference if I can do so.

    Sadly the systems that I'm working with at the moment don't support that.



    --
    Grant. . . .
    unix || die

  • From Ian Collins@21:1/5 to Grant Taylor on Tue May 19 16:51:22 2020
    On 19/05/2020 16:41, Grant Taylor wrote:
    On 5/14/20 8:51 PM, Gary R. Schmidt wrote:
    Many RAID adapters can be configured as having each disk as a RAID0
    volume, which you can then use as disks for a ZFS setup.

    Yep.

    I know the LSI MegaRAID controller can be set up that way, I've done to
    a few RX500's and their relatives over the years.  (Not all to run
    Solaris or ZFS, but to get them to do what we wanted.)

    I'm using Dell PowerEdge RAID Controllers because that's what's in the re-purposed systems and I can't (for many reasons) change the hardware.

    Which Dell controller model are you stuck with? Most do have the option
    of having each disk as a RAID0 volume or simply as a raw drive.

    --
    Ian.

  • From Grant Taylor@21:1/5 to Ian Collins on Mon May 18 23:35:24 2020
    On 5/18/20 10:51 PM, Ian Collins wrote:
    Which Dell controller model are you stuck with?  Most do have the option
    of having each disk as a RAID0 volume or simply as a raw drive.

    It looks like a PERC H710P.

    Yes, I can create a bunch of independent single disk RAID 0 volumes. I
    don't think I've been able to get pass-through working in a while.
    I'll take another swing at that when I'm doing the physical work.

    At the moment, I'm trying to learn some history about ZFS and why things
    were done the way that they were.



    --
    Grant. . . .
    unix || die

  • From Ian Collins@21:1/5 to Grant Taylor on Tue May 19 17:48:31 2020
    On 19/05/2020 17:35, Grant Taylor wrote:
    On 5/18/20 10:51 PM, Ian Collins wrote:
    Which Dell controller model are you stuck with?  Most do have the option
    of having each disk as a RAID0 volume or simply as a raw drive.

    It looks like a PERC H710P.

    Yes, I can create a bunch of independent single disk RAID 0 volumes. I
    don't think I've been able to get pass-through working in a while.
    I'll take another swing at that when I'm doing the physical work.

    https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/Enable-passthrough-on-H710P/td-p/4365545

    At the moment, I'm trying to learn some history about ZFS and why things
    were done the way that they were.

    A worthy quest :)

    --
    Ian.

  • From Gary R. Schmidt@21:1/5 to Grant Taylor on Tue May 19 15:49:25 2020
    On 19/05/2020 14:41, Grant Taylor wrote:
    On 5/14/20 8:51 PM, Gary R. Schmidt wrote:
    Many RAID adapters can be configured as having each disk as a RAID0
    volume, which you can then use as disks for a ZFS setup.

    Yep.

    I know the LSI MegaRAID controller can be set up that way, I've done
    to a few RX500's and their relatives over the years.  (Not all to run
    Solaris or ZFS, but to get them to do what we wanted.)

    I'm using Dell PowerEdge RAID Controllers because that's what's in the re-purposed systems and I can't (for many reasons) change the hardware.

    At least with some LSI cards, you can change them from Integrated RAID (a.k.a. IR) mode to Initiator / Target (a.k.a. IT) mode.  That's my preference if I can do so.

    Sadly the systems that I'm working with at the moment don't support that.

    Most of the PowerEdge RAID adaptors that have crossed my path can be
    flashed between IR and IT modes.

    It's not a job for the faint-hearted (IIRC it requires a Windows box),
    but usually it can be done.

    Cheers,
    Gary B-)

    --
    Waiting for a new signature to suggest itself...

  • From Casper H.S. Dik@21:1/5 to Grant Taylor on Tue May 19 10:09:58 2020
    Grant Taylor <gtaylor@tnetconsulting.net> writes:

    Since I got such a quick response to my last question, I figured I'd ask another one that's been bugging me.

    1) Why do ZFS pools use <pool name>/ROOT/solaris as their install
    point? Why not <pool name>/solaris?

    Under ROOT, all the other boot environments exist.

    So instead of having:

    rpool/11.3-sru3
    rpool/11.3-sru5
    rpool/11.4-FCS

    we gather them under ROOT:

    rpool/ROOT/11.3-sru3
    rpool/ROOT/11.3-sru5
    rpool/ROOT/11.4-FCS

    So we use only one reserved name (well, two: ROOT and VARSHARE) and
    under ROOT we have a namespace of its own for boot environments.

    I get wanting something to make it easy to separate other datasets in
    the pool.

    2) Why capital ROOT instead of lowercase root?

    Because most customers use lowercase file systems; we use ROOT and
    VARSHARE so they were much less likely to conflict with customers.

    Is it possibly caps to make it more annoying to type and thus less
    likely to be accidentally typed?

    No. In ZFS you are required to give names to file systems; in UFS you'd
    have "/" and it has no name.

    3) Do the same answers apply to <pool name>/BOOT/solaris?

    What type of system is that?

    4) Is there any historical significance to "rpool" / "bpool" / "tank"
    (case insensitive)?


    rpool == root pool
    bpool == boot pool (I think this is specific to certain larger systems
    which have no internal storage, if I remember correctly)

    "tank" is what is generally used in examples but in reality it is used a
    lot; it is NOT reserved and has no specific meaning for Solaris.

    Casper

  • From John D Groenveld@21:1/5 to Gary R. Schmidt on Tue May 19 11:27:26 2020
    In article <5k8cpg-39d.ln1@paranoia.mcleod-schmidt.id.au>,
    Gary R. Schmidt <grschmidt@acm.org> wrote:
    It's not a job for the faint-hearted (IIRC it requires a Windows box),
    but usually it can be done.

    It can be done from FreeDOS and probably Unbreakable Linux.
    But I would just tell Dell to swap in the HBA part for the PERC.
    John
    groenveld@acm.org

  • From Grant Taylor@21:1/5 to Gary R. Schmidt on Wed May 20 09:04:29 2020
    On 5/18/20 11:49 PM, Gary R. Schmidt wrote:
    Most of the PowerEdge RAID adaptors that have crossed my path can be
    flashed between IR and IT modes.

    That's good to know.

    It's not a job for the faint-hearted (IIRC it requires a Windows box),
    but usually it can be done.

    I've done it on a few non-PERC controllers. Things either worked, or
    reverted back to I.R. mode. I've not bricked anything yet. Thankfully.

    However, this is a future endeavor.



    --
    Grant. . . .
    unix || die

  • From Grant Taylor@21:1/5 to Ian Collins on Wed May 20 09:02:28 2020
    On 5/18/20 11:48 PM, Ian Collins wrote:
    https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/Enable-passthrough-on-H710P/td-p/4365545

    Hum. I'll check that out.

    Not that I'm going to make any more changes to this system remotely,
    during extended WFH, without a functional remote console.

    But that's good information for future.

    A worthy quest :)

    I thought so.

    Thankfully I can benefit from the knowledgeable people here in comp.unix.solaris. :-)



    --
    Grant. . . .
    unix || die

  • From Grant Taylor@21:1/5 to John D Groenveld on Wed May 20 09:05:28 2020
    On 5/19/20 5:27 AM, John D Groenveld wrote:
    It can be done from FreeDOS and probably Unbreakable Linux.

    Good to know.

    But I would just tell Dell to swap in the HBA part for the PERC.

    As previously stated, this is re-use of existing hardware. Engaging
    Dell to alter it, or to send me parts so I can alter it, is not an
    option this time.



    --
    Grant. . . .
    unix || die

  • From Grant Taylor@21:1/5 to Casper H.S. Dik on Wed May 20 09:14:32 2020
    On 5/19/20 4:09 AM, Casper H.S. Dik wrote:
    Under ROOT, all the other boot environments exist.

    So instead of having:

    rpool/11.3-sru3
    rpool/11.3-sru5
    rpool/11.4-FCS

    we gather them under ROOT:

    rpool/ROOT/11.3-sru3
    rpool/ROOT/11.3-sru5
    rpool/ROOT/11.4-FCS

    So we use only one reserved name (well, two: ROOT and VARSHARE) and
    under ROOT we have a namespace of its own for boot environments.

    Thank you for clarification.

    Because most customers use lowercase file systems; we use ROOT and
    VARSHARE so they were much less likely to conflict with customers.

    Understood.

    No. In ZFS you are required to give names to file systems; in UFS you'd
    have "/" and it has no name.

    I get the requirement to give the file system a name. But I don't see
    what that has to do with "ROOT" vs. "root".

    What type of system is that?

    I ran across it in multiple documents, many of which are on Oracle's
    website. I don't have specifics at hand.

    rpool == root pool
    bpool == boot pool (I think this is specific to certain larger systems
    which have no internal storage, if I remember correctly)

    My understanding is that bpool comes into play when the rpool can't be
    booted from for one reason or another. rpool's structure comes to mind,
    as in RAIDZ{1,2,3} / stripe.

    "tank" is what is generally used in examples but in reality it is used a
    lot; it is NOT reserved and has no specific meaning for Solaris.

    Thank you for clarifying that tank / TANK is not reserved.



    --
    Grant. . . .
    unix || die
