Hi and happy new year.
I manage a Sendmail server running on Oracle Linux 6 (old hardware),
which also manages some mailing lists.
This server deals with about 5300 e-mails daily and has been very
reliable throughout these years.
Being concerned about the aging server and constant budget restrictions
on this particular department, I don't expect to replace this
server any time soon.
However, I could eventually acquire some used but reliable servers
and virtualize them with Oracle KVM, as I have some experience with
the previous Oracle VM Manager technology.
I know that in a High Availability environment VMs should use shared
storage to allow live migrations, etc.
Not expecting live migrations,
my question is whether it is possible / recommended to run a Sendmail
VM with local storage in such an environment.
If by any chance one of these servers running Oracle KVM should
fail, I guess it would be relatively easy to migrate the VM to another
server from a recent backup, accepting the loss of some e-mails from the
previous hours. Does anyone run Sendmail this way, or in a cluster?
Thanks and regards.
On 1/6/22 7:29 AM, David Carvalho wrote:
Hi and happy new year.
Hi,
I manage a Sendmail server running on Oracle Linux 6 (old hardware),
Isn't Oracle Linux 6 based on RHEL 6 / CentOS 6? I believe that's quite
old software too. I think I'd be more worried about the old software than I would be about the old hardware.
which also manages some mailing lists.
What does "mailing list" mean in this context? Sendmail alias / expansions? A mailing list manager, e.g. Mailman? Something else?
This server deals with about 5300 e-mails daily and has been very
reliable throughout these years.
That seems like a relatively small, thus not difficult to handle, mail load. I've got a small VPS (1 vCPU / 2 GB RAM) that's running 20k ± 5k
a day through Sendmail, ClamAV, SpamAssassin, and multiple other
milters, all with no problem.
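For reference, milters like those get wired into sendmail.mc with the INPUT_MAIL_FILTER macro; a minimal sketch, where the filter names and socket paths are illustrative defaults rather than my actual config:

```m4
dnl Sketch of hooking milters into sendmail.mc; socket paths are
dnl examples and depend on how each milter package was built.
INPUT_MAIL_FILTER(`clamav-milter', `S=local:/var/run/clamav/clamav-milter.socket, F=, T=S:4m;R:4m')dnl
INPUT_MAIL_FILTER(`spamass-milter', `S=local:/var/run/spamass-milter/spamass-milter.sock, F=, T=S:4m;R:4m')dnl
dnl Optional: explicit ordering of the filter chain.
define(`confINPUT_MAIL_FILTERS', `clamav-milter,spamass-milter')dnl
```

Rebuild sendmail.cf from the .mc and restart Sendmail for the milters to take effect.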
Being concerned about the aging server and constant budget
restrictions on this particular department, I don't expect to replace
this server any time soon.
Virtualizing that system shouldn't be difficult. That would at least address the /hardware/ aspect. The /software/ concerns still stand.
However, I could eventually acquire some used but reliable servers
and virtualize them with Oracle KVM, as I have some experience with
the previous Oracle VM Manager technology.
Just about any virtualization technology should work. I'm not familiar
with anything named "Oracle KVM". VirtualBox (from Oracle) or (Linux)
KVM running on Oracle Linux come to mind. But Oracle does have other
things that play in this space.
I know that in a High Availability environment VMs should use shared
storage to allow live migrations, etc.
The hosting hypervisors should have access to the same back-end storage.
The guest VMs tend to be considerably simpler and usually only see
what they think is directly attached storage which is not shared with
other VMs.
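With libvirt/KVM, for instance, that usually just means the guest's disk image lives on a path every hypervisor mounts; a sketch, assuming a shared mount at /srv/vmstore (path and file name are made up):

```xml
<!-- Guest disk definition in the libvirt domain XML. The guest sees a
     plain virtio disk; only the hosts know /srv/vmstore is shared
     (NFS or similar) storage. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/srv/vmstore/mailserver.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```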
There are other, fancier things that you can do with many MTAs, involving
NFS and / or clustered file systems, such that various things (mail
queues, end-user mail stores, etc.) are shared between systems. This is more complex.
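The simple, non-clustered NFS variant could look something like this; the file server name (nfsserver), host names, and export path are all illustrative:

```
# /etc/exports on the file server:
/export/mail    mta1(rw,sync,no_root_squash) mta2(rw,sync,no_root_squash)

# /etc/fstab entry on each MTA host:
nfsserver:/export/mail  /var/mail  nfs  rw,hard,noatime  0 0
```

Sharing /var/mail this way is the easy case; sharing the active mail queue between live Sendmail instances is where the clustering headaches start.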
Not expecting Live Migrations,
Migrations, live / hot or cold, are a hypervisor issue and not directly related to Sendmail running in a guest VM.
my question is if it is possible / recommended to run a sendmail VM
with local storage, in such environment.
I've been running Sendmail in a VM (from a VPS provider) for myself as
well as Sendmail in multiple different VMs (VMware) for a friend for
more than a decade. Sendmail is none the wiser and perfectly happy to
run in a guest VM.
In some ways I think that running in a guest VM provides some hardware abstraction and allows you to move between different hardware with fewer problems / more simply. As a bonus, VMs tend to mean that getting to the console is more reliable. }:-)
It has extended support to July 2024, so there is plenty of time to
migrate to a newer version of another distribution.
Running a VM can make it easier to experiment with dist upgrade.
NFS has been quite simple to setup and works quite nicely with default settings for a small MTA,
setting up clustering with NFS is a bit more tricky,
I would look more into getting something similar to AWS EFS and store
things there; then it's easy to take snapshots, and backups should
be done by the provider. When having the storage in the cloud, why
not just run the VM in the cloud too, so you don't have to worry about
the hardware.
A drawback is when you want to move from one virtualization engine to
another; even if they claim it should work, they don't always play
ball with each other.
On 1/6/22 3:11 PM, J.O. Aho wrote:
It has extended support to July 2024, so there is plenty of time to migrate to a newer version of another distribution.
Interesting. Thank you for the clarification.
Running a VM can make it easier to experiment with dist upgrade.
This is absolute #TRUTH!!!
NFS has been quite simple to setup and works quite nicely with default settings for a small MTA,
*nod*
setting up clustering with NFS is a bit more tricky,
Yep.
The last time I read Red Hat's best practices on it, you were only
supposed to have the clustered file system mounted on one node at a
time. Something about NFS made things a lot more complicated; I don't remember the particulars.
I would more look into getting something similar to AWS EFS and store things there, then it's easy to take snapshots and backups should be done by the provider. When having the storage in the cloud, why not just run the VM in the cloud too, so don't have to worry about the hardware.
I guess I'm anti-cloud and want to host things on premises, so *AWS* is mostly a non-starter for me.
Drawback is when you want to move from one virtualization engine to another, even if they claim it should work, but they don't always play ball with each other.
That's where the old venerable method of doing a full system backup on
the old system (physical or virtual) and then a full system
restore on the new system (p or v) comes into play. Thankfully this
isn't that difficult to do with Linux.
--
Grant. . . .
unix || die
On Friday, January 7, 2022 at 2:55:25 AM UTC, Grant Taylor wrote:
Hi!
Thanks for all replies.
Yes, Oracle Linux 6 support ended in 2021; extended support (which we don't have) ends in 2024, as J.O. mentioned. The hardware might not be fully compatible, according to the "Preupgrade Assistant analysis report" tool included in Oracle Linux 6...
As I said, I also run Oracle VM servers and a manager with VM disks on NFS in another department. From experience, I know that it works well for most VMs. My doubt was not so much about having /home/user on NFS but /var/mail, which is intensively read/written. But since they can't afford a NAS or similar, the shared NFS filesystem probably won't be an issue.
As for cloud, we checked prices and there's no way. This is a small public college department, so the budget is more than tight ;)
You mentioned the tricky NFS cluster situation, and that, along with the unavailable funds for a NAS, is the main reason I'm more inclined to keep the guest storage local.
Let's see how much they can afford :)
Just remember, a fresh backup before dist upgrade is a good thing to
have, just in case.
An alternative could be building up a Ceph cluster (or any other clustered
file system) with old hardware; just see to it that you have the data
replicated, then it's not that big a deal if a node dies.
I would go with having the storage on the host; this gives you a bit more flexibility with the VM, and you don't have to worry about missing mails if
you switch from one VM to another....
On 1/7/22 7:59 AM, J.O. Aho wrote:
Just remember, a fresh backup before dist upgrade is a good thing to
have, just in case.
Test that backup too!
Try restoring the backup into a new VM (connected to a different network).
I'm serious about testing. It's much better to learn whether a backup is
working when you don't actually /need/ it to provide data. If it works and you can restore the entire system, great! If it doesn't work, then investigate that as its own problem.
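One way to make that test mechanical is a checksum manifest; a sketch, with /mnt/restored as an assumed mount point for the restored system and only /etc checked for brevity:

```shell
#!/bin/sh
# Sketch: verify a restored backup by comparing checksums.
# /mnt/restored and the manifest path are assumptions.
set -eu

# On the original system: record checksums of the files.
( cd /etc && find . -type f -exec sha256sum {} + ) > /tmp/etc.sha256

# On the restored test VM: re-check every recorded file.
( cd /mnt/restored/etc && sha256sum --check --quiet /tmp/etc.sha256 ) \
    && echo "restore verified"
```

Any file that went missing or changed in the restore will make the check exit non-zero and name the offender.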
I would go to have the storage on the host, this gives you a bit more
flexibility with the VM and not to have to worry about missing mails
if you switch from a VM to another....
Won't that be dependent on the replication between backing stores? Or
am I misunderstanding your recommendation?
I was just assuming David meant to have the files locally in the VM, and
if he was going for that option I thought to store them on the host
instead; this is not related to the Ceph suggestion earlier.