Hello list,
Not long ago I read that we should allow 2GB RAM for every emerge job - that is, we should divide our RAM size by 2 to get the maximum number of simultaneous jobs. I'm trying to get that right, but I'm not there yet.
I have these entries in make.conf:
EMERGE_DEFAULT_OPTS="--jobs=16 --load-average=32 --autounmask=n --quiet-unmerge-warn --ke>
MAKEOPTS="-j16"
Today, though, I saw load averages going up to 72. Can anyone suggest better values to suit my 24 threads and 64GB RAM?
On Wednesday, 15 February 2023, 10:56:22 CET, Peter Humphrey wrote:
Hello list,
Not long ago I read that we should allow 2GB RAM for every emerge job - that is, we should divide our RAM size by 2 to get the maximum number of simultaneous jobs. I'm trying to get that right, but I'm not there yet.
I have these entries in make.conf:
EMERGE_DEFAULT_OPTS="--jobs=16 --load-average=32 --autounmask=n --quiet-unmerge-warn --ke>
MAKEOPTS="-j16"
Today, though, I saw load averages going up to 72. Can anyone suggest better values to suit my 24 threads and 64GB RAM?
Maybe you are interested in this wiki article:
https://wiki.gentoo.org/wiki/User:Pietinger/Tutorials/Optimize_compile_times
Regards,
Peter
On Wed, Feb 15, 2023 at 4:56 AM Peter Humphrey <peter@prh.myzen.co.uk> wrote:
Not long ago I read that we should allow 2GB RAM for every emerge job - that is, we should divide our RAM size by 2 to get the maximum number of simultaneous jobs. I'm trying to get that right, but I'm not there yet.
I have these entries in make.conf:
EMERGE_DEFAULT_OPTS="--jobs=16 --load-average=32 --autounmask=n --quiet-unmerge-warn --ke>
MAKEOPTS="-j16"
Today, though, I saw load averages going up to 72. Can anyone suggest better values to suit my 24 threads and 64GB RAM?
First, keep in mind that --jobs=16 + -j16 can result in up to 256
(16*16) tasks running at once. Of course, that is worst case and most
of the time you'll have way less than that.
Keep in mind that you need to consider available RAM and not just
total RAM. Run free under the conditions where you typically run
emerge and see how much available memory it displays. Depending on
what you have running it could be much lower than 64GB.
Beyond that, unfortunately this is hard to deal with beyond just
figuring out what needs more RAM and making exceptions in package.env.
Also, RAM pressure could also come from the build directory if it is
on tmpfs, which of course many of us use.
Some packages that I build with either a greatly reduced -j setting or
a non-tmpfs build directory are:
sys-cluster/ceph
dev-python/scipy
dev-python/pandas
app-office/calligra
net-libs/nodejs
dev-qt/qtwebengine
dev-qt/qtwebkit
dev-lang/spidermonkey
www-client/chromium
app-office/libreoffice
sys-devel/llvm
dev-lang/rust (I use the rust binary these days as this has gotten
really out of hand)
x11-libs/gtk+
These are just packages I've had issues with at some point, and it is possible that some of these packages no longer use as much memory
today.
You can have both a generic MAKEOPTS in make.conf, which suits your typical
emerge operations and will not cause your PC to explode when combined with
EMERGE_DEFAULT_OPTS, and package-specific MAKEOPTS entries in package.env
to fine-tune individual packages' requirements.
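For anyone who hasn't set this up before, a minimal sketch of the package.env
mechanism (the file name "heavy.conf" and the -j value are just examples; any
name under /etc/portage/env/ works):

```
# /etc/portage/env/heavy.conf -- reduced parallelism for memory-hungry builds
MAKEOPTS="-j4"

# /etc/portage/package.env -- apply that environment to specific packages
dev-qt/qtwebengine heavy.conf
www-client/chromium heavy.conf
```

Everything else keeps the generic MAKEOPTS from make.conf; only the listed
packages build with the reduced job count.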
On Wednesday, 15 February 2023 13:18:24 GMT Rich Freeman wrote:
On Wed, Feb 15, 2023 at 4:56 AM Peter Humphrey <peter@prh.myzen.co.uk>
wrote:
Not long ago I read that we should allow 2GB RAM for every emerge job - that is, we should divide our RAM size by 2 to get the maximum number of simultaneous jobs. I'm trying to get that right, but I'm not there yet.
I have these entries in make.conf:
EMERGE_DEFAULT_OPTS="--jobs=16 --load-average=32 --autounmask=n --quiet-unmerge-warn --ke>
MAKEOPTS="-j16"
Today, though, I saw load averages going up to 72. Can anyone suggest better values to suit my 24 threads and 64GB RAM?
First, keep in mind that --jobs=16 + -j16 can result in up to 256
(16*16) tasks running at once. Of course, that is worst case and most
of the time you'll have way less than that.
Keep in mind that you need to consider available RAM and not just
total RAM. Run free under the conditions where you typically run
emerge and see how much available memory it displays. Depending on
what you have running it could be much lower than 64GB.
Beyond that, unfortunately this is hard to deal with beyond just
figuring out what needs more RAM and making exceptions in package.env.
Also, RAM pressure could also come from the build directory if it is
on tmpfs, which of course many of us use.
Some packages that I build with either a greatly reduced -j setting or
a non-tmpfs build directory are:
sys-cluster/ceph
dev-python/scipy
dev-python/pandas
app-office/calligra
net-libs/nodejs
dev-qt/qtwebengine
dev-qt/qtwebkit
dev-lang/spidermonkey
www-client/chromium
app-office/libreoffice
sys-devel/llvm
dev-lang/rust (I use the rust binary these days as this has gotten
really out of hand)
x11-libs/gtk+
These are just packages I've had issues with at some point, and it is possible that some of these packages no longer use as much memory
today.
Thank you all. I can see what I'm doing better now. (Politicians aren't the only ones who can be ambiguous!)
I'll start by picking up the point I'd missed - putting MAKEOPTS in package.env.
On Wednesday, 15 February 2023 13:18:24 GMT Rich Freeman wrote:
First, keep in mind that --jobs=16 + -j16 can result in up to 256
(16*16) tasks running at once. Of course, that is worst case and most
of the time you'll have way less than that.
Yes, I was aware of that, but why didn't --load-average=32 take precedence?
On Thu, 16 Feb 2023 09:53:30 +0000
Peter Humphrey <peter@prh.myzen.co.uk> wrote:
Yes, I was aware of that, but why didn't --load-average=32 take precedence?
This only means that emerge will not schedule an additional package job
(where a package job means something like `emerge gcc`) when the load
average is above 32; however, once a job is scheduled it keeps running,
independently of the current load.
When the limit is set in MAKEOPTS instead, it is handled by make itself,
which schedules the individual build jobs and stops launching additional
ones when the load is too high.
Extreme case:
emerge chromium firefox qtwebengine
--> your load when you run this is close to 0, so all three packages are
merged simultaneously and each is built with -j16.
That is, for a long time you will have about 3*16 = 48 individual build
jobs running in parallel, and you should see the load heading towards 48
when you have no load limit in your MAKEOPTS.
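To make the distinction concrete, here is one way to express it in
make.conf (the numbers are only illustrative, not a recommendation):
--load-average in EMERGE_DEFAULT_OPTS gates whether emerge starts another
package, while -l in MAKEOPTS lets make itself stop spawning compile jobs
once the load passes the threshold:

```
# make.conf -- illustrative values
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=24"
MAKEOPTS="-j16 -l24"
```

With the -l24 in place, each running make throttles its own job spawning,
so several concurrent emerges no longer push the load far past the target.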
Much useful detail.
The load average setting is definitely useful and I would definitely
set it, but when the issue is swapping it doesn't go far enough. Make
has no idea how much memory a gcc process will require. Since that is
the resource likely causing problems it is hard to efficiently max out
your cores without actually accounting for memory use. The best I've
been able to do is just set things conservatively so it never gets out
of control, and underutilizes CPU in the process. Often it is only
parts of a build that even have issues - something big like chromium
might have 10,000 tasks that would run fine with -j16 or whatever, but
then there is this one part where the jobs all want a ton of RAM and
you need to run just that one part at a lower setting.
I've just looked at 'man make', from which it's clear that -j = --jobs, and that both those and --load-average are passed to /usr/bin/make, presumably untouched unless portage itself has identically named variables. So I wonder how feasible it might be for make to incorporate its own checks to ensure that
the load average is not exceeded. I am not a programmer (not for at least 35 years, anyway), so I have to leave any such suggestion to the experts.
On Thu, Feb 16, 2023 at 8:39 AM Peter Humphrey <peter@prh.myzen.co.uk> wrote:
I've just looked at 'man make', from which it's clear that -j = --jobs, and that both those and --load-average are passed to /usr/bin/make, presumably untouched unless portage itself has identically named variables. So I wonder
how feasible it might be for make to incorporate its own checks to ensure that
the load average is not exceeded. I am not a programmer (not for at least 35
years, anyway), so I have to leave any such suggestion to the experts.
Well, if we just want to have a fun discussion here are my thoughts.
However, the complexity vs usefulness outside of Gentoo is such that I
don't see it happening.
For the most typical use case - a developer building the same thing
over and over (which isn't Gentoo), then make could cache info on
resources consumed, and use that to make more educated decisions about
how many tasks to launch. That wouldn't help us at all, but it would
help the typical make user. However, the typical make user can just
tune things in other ways.
It isn't going to be possible for make to estimate build complexity in
any practical way. Halting problem aside maybe you could build in
some smarts looking at the program being executed and its arguments,
but it would be a big mess.
Something make could do is tune the damping a bit. It could gradually increase the number of jobs it runs and watch the load average, and
gradually scale it up appropriately, and gradually scale down if CPU
is the issue, or rapidly scale down if swap is the issue. If swapping
is detected it could even suspend most of the tasks it has spawned and
then gradually continue them as other tasks finish to recover from
this condition. However, this isn't going to work as well if portage
is itself spawning parallel instances of make - they'd have to talk to
each other or portage would somehow need to supervise things.
A way of thinking about it: when portage spawns multiple instances of
make, that is a bit like adding gain to the --load-average feedback loop.
Each instance of make independently looks at the load average and takes
action. So you have an output (compilers that create load), you sample
that load with a time-weighted average, you apply gain to that average,
and then you use it as feedback. That's basically a recipe for
out-of-control oscillation. You need to add damping and get rid of the gain.
Disclaimer: I'm not an engineer and I suspect a real engineer would be
able to add a bit more insight.
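The gain/oscillation point can be sketched with a toy simulation (entirely
hypothetical numbers: each "instance" stands for one make process watching
the same lagged load average, adding a job while the signal is below the
target and dropping one when it is above):

```python
# Toy model: several independent controllers (think: parallel make
# instances) each react to a *lagged* load average. None of them
# accounts for the jobs the others have just started.

def peak_load(instances, target, steps=60, alpha=0.2):
    """Return the peak true load seen. alpha smooths the load signal;
    a smaller alpha means more lag, like a 1-minute load average."""
    jobs = [0] * instances   # jobs each instance is currently running
    load_avg = 0.0           # shared, smoothed load signal
    peak = 0
    for _ in range(steps):
        for i in range(instances):
            if load_avg < target:
                jobs[i] += 1    # every instance reacts to the same stale signal
            elif jobs[i] > 0:
                jobs[i] -= 1
        true_load = sum(jobs)
        load_avg = alpha * true_load + (1 - alpha) * load_avg
        peak = max(peak, true_load)
    return peak

# One instance overshoots the target a little, purely from the lag;
# four instances overshoot far more, because the effective feedback
# "gain" is multiplied by four.
print(peak_load(1, target=24), peak_load(4, target=24))
```

Even the single instance overshoots because of the lag in the averaged
signal; multiplying the instances multiplies the overshoot, which is the
oscillation described above.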
Really though the issue is that this is the sort of thing that only
impacts Gentoo and so nobody else is likely to solve this problem for
us.