• Sendmail server as Oracle KVM cluster guest

    From David Carvalho@21:1/5 to All on Thu Jan 6 06:29:56 2022
    Hi and happy new year.
    I manage a Sendmail server running on Oracle Linux 6 (old hardware), which also manages some mailing lists. This server deals with about 5300 e-mails daily and has been very reliable throughout these years.

    Being concerned about the aging server and constant budget restrictions in this particular department, I don't expect to replace this server any time soon. However, eventually I could acquire some used but reliable servers and virtualize them with Oracle KVM,
    as I have some experience with the previous Oracle VM Manager technology.

    I know that in a High Availability environment VMs should use shared storage to allow live migrations, etc.

    Not expecting live migrations, my question is whether it is possible / recommended to run a Sendmail VM with local storage in such an environment. If by any chance one of these servers running Oracle KVM were to fail, I guess it would be relatively easy to
    migrate the VM to another server from a recent backup, accepting some lost e-mails from the previous hours.
    Does anyone run Sendmail this way or in a cluster?

    Thanks and regards.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to David Carvalho on Thu Jan 6 13:01:37 2022
    On 1/6/22 7:29 AM, David Carvalho wrote:
    > Hi and happy new year.

    Hi,

    > I manage a Sendmail server running on Oracle Linux 6 (old hardware),

    Isn't Oracle Linux based on RHEL 6 / CentOS 6? I believe that's quite
    old software too. -- I think that I'd be more worried about the old
    software than I would be about the old hardware.

    > which also manages some mailing lists.

    What does "mailing list" mean in this context? Sendmail alias /
    expansions? A mailing list manager, e.g. Mailman? Something else?

    > This server deals with about 5300 e-mails daily and has been very
    > reliable throughout these years.

    That seems like a relatively small, thus not difficult to handle, mail
    load. I've got a small VPS (1 vCPU / 2 GB RAM) that's running 20k ± 5k
    a day through Sendmail, ClamAV, SpamAssassin, and multiple other
    milters, all with no problem.

    > Being concerned about the aging server and constant budget
    > restrictions in this particular department, I don't expect to
    > replace this server any time soon.

    Virtualizing that system shouldn't be difficult. That would at least
    address the /hardware/ aspect. The /software/ concerns still stand.

    > However, eventually I could acquire some used but reliable servers
    > and virtualize them with Oracle KVM, as I have some experience with
    > the previous Oracle VM Manager technology.

    Just about any virtualization technology should work. I'm not familiar
    with anything named "Oracle KVM". VirtualBox (from Oracle) or (Linux)
    KVM running on Oracle Linux come to mind. But Oracle does have other
    things that play in this space.

    > I know that in a High Availability environment VMs should use shared
    > storage to allow live migrations, etc.

    The hosting hypervisors should have access to the same back-end storage.
    The guest VMs tend to be considerably simpler and usually only see
    what they think is directly attached storage which is not shared with
    other VMs.

    There are other, fancier things that you can do with many MTAs,
    involving NFS and / or clustered file systems, such that various
    things (mail queues, end user mail stores, etc.) are shared between
    systems. This is more complex.
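
    Purely as an illustration, a minimal sketch of the NFS flavor of this
    (the hostnames, paths, and export options here are hypothetical, and
    two MTAs sharing one queue directory would still need careful
    locking):

        # On the NFS server -- /etc/exports: export a spool to the MTAs.
        /export/mailq   mta1.example.com(rw,sync,no_root_squash)
        /export/mailq   mta2.example.com(rw,sync,no_root_squash)

        # On each MTA guest -- /etc/fstab: mount it where Sendmail
        # expects its queue.
        nfs1.example.com:/export/mailq  /var/spool/mqueue  nfs  hard,vers=4  0 0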

    > Not expecting live migrations,

    Migrations, live / hot or cold, are a hypervisor issue and not directly
    related to Sendmail running in a guest VM.
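
    For example, with shared storage and plain libvirt / KVM, a live
    migration is entirely a host-side operation (the guest and host names
    below are hypothetical); Sendmail inside the guest never notices:

        # Run on the source host: move the running guest "mailvm" to
        # host2 while it keeps accepting mail.
        virsh migrate --live --persistent mailvm qemu+ssh://host2.example.com/system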

    > my question is whether it is possible / recommended to run a Sendmail
    > VM with local storage in such an environment.

    I've been running Sendmail in a VM (from a VPS provider) for myself as
    well as Sendmail in multiple different VMs (VMware) for a friend for
    more than a decade. Sendmail is none the wiser and perfectly happy to
    run in a guest VM.

    In some ways I think that running in a guest VM provides some
    hardware abstraction and allows you to move between different
    hardware with fewer problems / more simply. As a bonus, VMs tend to
    mean that getting to the console is more reliable. }:-)

    > If by any chance one of these servers running Oracle KVM were to
    > fail, I guess it would be relatively easy to migrate the VM to
    > another server from a recent backup, accepting some lost e-mails
    > from the previous hours. Does anyone run Sendmail this way or in a
    > cluster?

    I would expect that the hypervisor hosts would have shared storage of
    some form so that if a host dies, you can simply power on the guest VM
    on one of the surviving hosts. It then becomes tantamount to recovering
    from a power failure and the typical file system checks, file
    consistency / corruption issues, etc.

    If I didn't have the option of shared storage (of some sort), I'd
    probably tend to push the redundancy into the guest VM, in that I
    would want it to have a disk from two different back-end storage
    servers and rely on software-based RAID, etc. The idea being that
    normally the guest VM sees both of its disks; in the event of a host
    failure, the guest VM would see half of its disks.
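
    A rough sketch of that idea, assuming the hypervisor presents the
    guest one virtual disk from each back-end store (the device names are
    hypothetical):

        # Inside the guest: mirror the two virtual disks with md RAID1
        # so that losing one back end only degrades the array.
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
        mkfs.ext4 /dev/md0
        mount /dev/md0 /var/spool/mqueue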

    There are a number of ways to do this. It's a mixture of what
    technology is used (NFS, iSCSI, etc.) and what layer it's done at (host
    and / or guest).

    This may actually be a case where the redundancy support adds a
    considerable amount of complexity. As such, you should seriously
    evaluate whether the added complexity is worth it or not. I sort of
    suspect that adding the redundancy to / at the guest VM level is not
    worth the complexity.

    If I were to design a redundant stack (for many more messages than
    you're talking about) I'd think seriously about multiple VMs (to run
    the MTAs) with fairly simple local disk and NFS for shared storage.
    Have a redundant NFS infrastructure (this has its own complications).
    Probably use redundant VMs for IMAP / POP3 / etc. accessing the
    client email store on NFS. There would probably also be Kerberos and
    LDAP in the mix. -- This is probably considerable overkill for ~6k
    messages a day.
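
    The front door of such a stack is the easy part: equal-preference MX
    records spread inbound mail across the MTA VMs and give you failover
    for free. A hypothetical zone fragment:

        ; Senders pick either MTA and retry the other if one is down.
        example.com.    IN  MX  10  mta1.example.com.
        example.com.    IN  MX  10  mta2.example.com.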

    > Thanks and regards.

    You're welcome.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From J.O. Aho@21:1/5 to Grant Taylor on Thu Jan 6 23:11:13 2022
    On 06/01/2022 21.01, Grant Taylor wrote:
    > On 1/6/22 7:29 AM, David Carvalho wrote:
    >> Hi and happy new year.

    > Hi,

    >> I manage a Sendmail server running on Oracle Linux 6 (old hardware),

    > Isn't Oracle Linux based on RHEL 6 / CentOS 6? I believe that's
    > quite old software too. -- I think that I'd be more worried about
    > the old software than I would be about the old hardware.

    It has extended support to July 2024, so there is plenty of time to
    migrate to a newer version of another distribution.

    Running a VM can make it easier to experiment with dist upgrade.
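
    For example, with libvirt you can clone the guest (the name here is
    made up; clone while the source VM is shut down) and run the trial
    upgrade on the copy:

        # Make a throwaway copy of the mail VM, then dist upgrade that.
        virt-clone --original mailvm --name mailvm-upgrade-test --auto-clone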


    >> which also manages some mailing lists.

    > What does "mailing list" mean in this context? Sendmail alias /
    > expansions? A mailing list manager, e.g. Mailman? Something else?

    >> This server deals with about 5300 e-mails daily and has been very
    >> reliable throughout these years.

    > That seems like a relatively small, thus not difficult to handle,
    > mail load. I've got a small VPS (1 vCPU / 2 GB RAM) that's running
    > 20k ± 5k a day through Sendmail, ClamAV, SpamAssassin, and multiple
    > other milters, all with no problem.

    >> Being concerned about the aging server and constant budget
    >> restrictions in this particular department, I don't expect to
    >> replace this server any time soon.

    > Virtualizing that system shouldn't be difficult. That would at
    > least address the /hardware/ aspect. The /software/ concerns still
    > stand.

    >> However, eventually I could acquire some used but reliable servers
    >> and virtualize them with Oracle KVM, as I have some experience
    >> with the previous Oracle VM Manager technology.

    > Just about any virtualization technology should work. I'm not
    > familiar with anything named "Oracle KVM". VirtualBox (from Oracle)
    > or (Linux) KVM running on Oracle Linux come to mind. But Oracle
    > does have other things that play in this space.

    >> I know that in a High Availability environment VMs should use
    >> shared storage to allow live migrations, etc.

    > The hosting hypervisors should have access to the same back-end
    > storage. The guest VMs tend to be considerably simpler and usually
    > only see what they think is directly attached storage which is not
    > shared with other VMs.

    > There are other, fancier things that you can do with many MTAs,
    > involving NFS and / or clustered file systems, such that various
    > things (mail queues, end user mail stores, etc.) are shared between
    > systems. This is more complex.

    NFS has been quite simple to set up and works quite nicely with
    default settings for a small MTA; setting up clustering with NFS is
    a bit more tricky. I would rather look into getting something
    similar to AWS EFS and store things there; then it's easy to take
    snapshots, and backups should be done by the provider. And when the
    storage is in the cloud, why not just run the VM in the cloud too,
    so you don't have to worry about the hardware.


    >> Not expecting live migrations,

    > Migrations, live / hot or cold, are a hypervisor issue and not
    > directly related to Sendmail running in a guest VM.

    >> my question is whether it is possible / recommended to run a
    >> Sendmail VM with local storage in such an environment.

    > I've been running Sendmail in a VM (from a VPS provider) for myself
    > as well as Sendmail in multiple different VMs (VMware) for a friend
    > for more than a decade. Sendmail is none the wiser and perfectly
    > happy to run in a guest VM.

    > In some ways I think that running in a guest VM provides some
    > hardware abstraction and allows you to move between different
    > hardware with fewer problems / more simply. As a bonus, VMs tend to
    > mean that getting to the console is more reliable. }:-)

    The drawback is when you want to move from one virtualization engine
    to another; even if they claim it should work, they don't always
    play ball with each other.


    --

    //Aho

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to J.O. Aho on Thu Jan 6 19:55:16 2022
    On 1/6/22 3:11 PM, J.O. Aho wrote:
    > It has extended support to July 2024, so there is plenty of time to
    > migrate to a newer version of another distribution.

    Interesting. Thank you for the clarification.

    > Running a VM can make it easier to experiment with dist upgrade.

    This is absolute #TRUTH!!!

    > NFS has been quite simple to set up and works quite nicely with
    > default settings for a small MTA,

    *nod*

    > setting up clustering with NFS is a bit more tricky,

    Yep.

    The last time I read Red Hat's best practice on it, you were only
    supposed to have the clustered file system mounted on one node at a
    time. Something about NFS makes things a lot more complicated. I
    don't remember the particulars.

    > I would rather look into getting something similar to AWS EFS and
    > store things there; then it's easy to take snapshots, and backups
    > should be done by the provider. And when the storage is in the
    > cloud, why not just run the VM in the cloud too, so you don't have
    > to worry about the hardware.

    I guess I'm anti-cloud and want to host things on premises, so *AWS*
    is mostly a non-starter for me.

    > The drawback is when you want to move from one virtualization
    > engine to another; even if they claim it should work, they don't
    > always play ball with each other.

    That's where the venerable method of doing a full system backup on
    the old system (physical or virtual) and then doing a full system
    restore on the new system (p or v) comes into play. Thankfully this
    isn't that difficult to do with Linux.
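
    A bare-bones sketch of that method (the paths are hypothetical, and
    this assumes a rescue environment on the new system):

        # On the old system: archive everything except other mounted
        # file systems (and thus /proc, /sys, etc.).
        tar --one-file-system -cpzf /mnt/backup/fullsys.tar.gz /

        # On the new system, booted from rescue media: partition, mkfs,
        # mount the new root at /mnt, then:
        tar -xpzf /mnt/backup/fullsys.tar.gz -C /mnt
        # ...then fix /etc/fstab and reinstall the boot loader.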



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Carvalho@21:1/5 to Grant Taylor on Fri Jan 7 01:20:19 2022
    On Friday, January 7, 2022 at 2:55:25 AM UTC, Grant Taylor wrote:
    > On 1/6/22 3:11 PM, J.O. Aho wrote:
    >> It has extended support to July 2024, so there is plenty of time
    >> to migrate to a newer version of another distribution.
    > Interesting. Thank you for the clarification.
    >> Running a VM can make it easier to experiment with dist upgrade.
    > This is absolute #TRUTH!!!
    >> NFS has been quite simple to set up and works quite nicely with
    >> default settings for a small MTA,
    > *nod*
    >> setting up clustering with NFS is a bit more tricky,
    > Yep.
    >
    > The last time I read Red Hat's best practice on it, you were only
    > supposed to have the clustered file system mounted on one node at
    > a time. Something about NFS makes things a lot more complicated. I
    > don't remember the particulars.
    >> I would rather look into getting something similar to AWS EFS and
    >> store things there; then it's easy to take snapshots, and backups
    >> should be done by the provider. And when the storage is in the
    >> cloud, why not just run the VM in the cloud too, so you don't have
    >> to worry about the hardware.
    > I guess I'm anti-cloud and want to host things on premises, so
    > *AWS* is mostly a non-starter for me.
    >> The drawback is when you want to move from one virtualization
    >> engine to another; even if they claim it should work, they don't
    >> always play ball with each other.
    > That's where the venerable method of doing a full system backup on
    > the old system (physical or virtual) and then doing a full system
    > restore on the new system (p or v) comes into play. Thankfully this
    > isn't that difficult to do with Linux.
    > --
    > Grant. . . .
    > unix || die


    Hi!
    Thanks for all the replies.
    Yes, Oracle Linux 6 support ended in 2021; extended support (which
    we don't have) ends in 2024, as J.O. mentioned. And according to the
    "Preupgrade Assistant analysis report" tool included in Oracle Linux
    6, the hardware might not be fully compatible...
    As I said, I also run Oracle VM servers and a manager, with VM disks
    on NFS, in another department. From experience, I know that it works
    well for most VMs. My doubt was not so much about having /home/user
    on NFS as about /var/mail, which is intensively read / written. But
    since they can't afford a NAS or similar, the shared NFS filesystem
    probably won't be an option.
    As for cloud, we checked prices and there's no way. This is a small public college department, so the budget is more than tight ;)
    You mentioned the tricky NFS cluster situation, and that, along with
    the unavailable funds for a NAS, is the main reason I'm more
    inclined to keep the guest storage local.
    Let's see how much they can afford :)
    Thank you all for your time and input.
    Regards
    David

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From J.O. Aho@21:1/5 to David Carvalho on Fri Jan 7 15:59:19 2022
    On 07/01/2022 10.20, David Carvalho wrote:
    > On Friday, January 7, 2022 at 2:55:25 AM UTC, Grant Taylor wrote:
    > [snip]


    > Hi!
    > Thanks for all the replies.
    > Yes, Oracle Linux 6 support ended in 2021; extended support (which
    > we don't have) ends in 2024, as J.O. mentioned. And according to
    > the "Preupgrade Assistant analysis report" tool included in Oracle
    > Linux 6, the hardware might not be fully compatible...

    If they have a LiveCD version, I would try that one out during your
    next scheduled take-down of the server; if everything boots and all
    hardware is detected, then there shouldn't be any issues with the
    upgrade. Just remember, a fresh backup before a dist upgrade is a
    good thing to have, just in case.


    > As I said, I also run Oracle VM servers and a manager, with VM
    > disks on NFS, in another department. From experience, I know that
    > it works well for most VMs. My doubt was not so much about having
    > /home/user on NFS as about /var/mail, which is intensively read /
    > written. But since they can't afford a NAS or similar, the shared
    > NFS filesystem probably won't be an option.
    > As for cloud, we checked prices and there's no way. This is a small
    > public college department, so the budget is more than tight ;)

    An alternative could be building a Ceph cluster (or any other
    clustered file system) with old hardware; just see to it that the
    data is replicated, and then it's not that big a deal if a node
    dies. Sure, running loads of old computers will cost electricity,
    but that may be another department's headache. ;)
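
    As a taste of the replication side (the pool name and numbers are
    made up, and a working cluster with MONs and OSDs has to exist
    first):

        # Keep three copies of every object in the mail store pool,
        # so a dead node is no big deal.
        ceph osd pool create mailstore 64
        ceph osd pool set mailstore size 3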


    > You mentioned the tricky NFS cluster situation, and that, along
    > with the unavailable funds for a NAS, is the main reason I'm more
    > inclined to keep the guest storage local.
    > Let's see how much they can afford :)

    I would choose to have the storage on the host; this gives you a bit
    more flexibility with the VM, and you don't have to worry about
    missing mails if you switch from one VM to another (say you do a
    successful dist upgrade on a copy of the VM; then you can simply
    share the mail storage from the host, whereas otherwise you would
    need to run a backup / restore process).
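
    One way to wire that up is a libvirt 9p passthrough of a host
    directory (just a sketch; the tag and paths here are made up):

        <!-- In the guest's libvirt XML: expose /srv/mail from the host. -->
        <filesystem type='mount' accessmode='mapped'>
          <source dir='/srv/mail'/>
          <target dir='mailstore'/>
        </filesystem>

        # Inside the guest:
        mount -t 9p -o trans=virtio,version=9p2000.L mailstore /var/mail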


    --

    //Aho

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to J.O. Aho on Fri Jan 7 09:52:49 2022
    On 1/7/22 7:59 AM, J.O. Aho wrote:
    > Just remember, a fresh backup before a dist upgrade is a good thing
    > to have, just in case.

    Test that backup too!

    Try restoring the backup into a new VM (connected to a different network).

    I'm serious about testing. It's much better to learn if a backup is
    working or not when you don't actually /need/ it to provide data. If it
    works and you can restore the entire system, great! If it doesn't work,
    then investigate that as its own problem.
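
    With libvirt, the "different network" part can be an isolated
    virtual network (no <forward> element means no uplink), so the
    restored copy can't fight the production box over addresses or start
    delivering mail. The names and addresses below are hypothetical:

        <!-- isolated.xml: a libvirt network with no outside access. -->
        <network>
          <name>restore-test</name>
          <bridge name='virbr9'/>
          <ip address='192.168.250.1' netmask='255.255.255.0'/>
        </network>

        virsh net-define isolated.xml
        virsh net-start restore-test
        # Then attach the restored VM's NIC to "restore-test".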

    Aside: I've successfully used the backup & restore method to migrate
    Linux machines across different hardware. It's /almost/ my preferred
    method to do it.

    > An alternative could be building a Ceph cluster (or any other
    > clustered file system) with old hardware; just see to it that the
    > data is replicated, and then it's not that big a deal if a node
    > dies.

    Hum. I hadn't thought about the likes of Ceph. I'd add that Coda or
    AFS could also be contenders here.

    > I would choose to have the storage on the host; this gives you a
    > bit more flexibility with the VM, and you don't have to worry about
    > missing mails if you switch from one VM to another....

    Won't that be dependent on the replication between backing stores? Or
    am I misunderstanding your recommendation?



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From J.O. Aho@21:1/5 to Grant Taylor on Fri Jan 7 21:52:27 2022
    On 07/01/2022 17.52, Grant Taylor wrote:
    > On 1/7/22 7:59 AM, J.O. Aho wrote:
    >> Just remember, a fresh backup before a dist upgrade is a good
    >> thing to have, just in case.

    > Test that backup too!

    > Try restoring the backup into a new VM (connected to a different
    > network).

    > I'm serious about testing. It's much better to learn if a backup is
    > working or not when you don't actually /need/ it to provide data.
    > If it works and you can restore the entire system, great! If it
    > doesn't work, then investigate that as its own problem.

    Yeah, there have been some corporations that failed to back up
    systems (remember Microsoft's Zune storage update, where a failed
    update resulted in a corrupt NAS system and no backup, so customers
    lost their data)... I know a company I worked for (which will not be
    named) where a system was backed up each day with an incremental
    backup, but someone had deleted the initial image to regain space
    after the monitoring system warned about low disk space on the
    backup system. When things went wrong in production and it was time
    to restore data, the restore kind of failed... I wonder why ;)

    I do recommend a backup and then testing the backup; it should be
    tested from time to time, so a change somewhere doesn't cause a
    problem later on.



    >> I would choose to have the storage on the host; this gives you a
    >> bit more flexibility with the VM, and you don't have to worry
    >> about missing mails if you switch from one VM to another....

    > Won't that be dependent on the replication between backing stores?
    > Or am I misunderstanding your recommendation?

    I was just assuming David meant to have the files locally in the VM,
    and if he was going for that option, I thought to suggest storing
    them on the host instead; it's not related to the Ceph suggestion
    earlier.


    --

    //Aho

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to J.O. Aho on Fri Jan 7 14:59:53 2022
    On 1/7/22 1:52 PM, J.O. Aho wrote:
    > I was just assuming David meant to have the files locally in the
    > VM, and if he was going for that option, I thought to suggest
    > storing them on the host instead; it's not related to the Ceph
    > suggestion earlier.

    Ya.... What is local to the VM / host gets rather nebulous when you
    start considering things like Fibre Channel / iSCSI / NBD / etc.;
    are they remote or local? This applies at both the host hypervisor
    and the guest VM level. Thankfully NFS (and other NAS protocols)
    tend to be considered remote.

    Aside: You can't even really declare that Fibre Channel is remote b/c
    there are things like old SUN systems that use Fibre Channel /within/
    the main system chassis. }:-)

    Then you get into things like the host hypervisor presenting what it
    thinks are remote disks to the guest VM, which thinks they are
    directly attached to its local SCSI / IDE bus. ;-)



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)