On Fri, Sep 24, 2021 at 10:35 PM William Kenworthy <billk@iinet.net.au> wrote:
In going down the NUMA rabbit hole, I discovered "irqbalance". Does
anyone have an opinion on its usefulness? It is in portage.
On some multicore ARM systems I am using IRQ affinity to steer certain
IRQs (network, USB) to the faster CPUs. From what I have been reading,
irqbalance can improve a mixed workload, but a system with a small
number of busy IRQs is better served by separating them and pinning
each to a different, more powerful processor, e.g. on ARM big.LITTLE
architectures.
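For reference, the manual steering described above is done through procfs. A minimal sketch (the CPU number and IRQ number here are hypothetical; the real IRQ number comes from /proc/interrupts, and writing the mask requires root):

```shell
# Build the affinity mask for a given CPU: bit N set means CPU N may
# service the interrupt. CPU 2 here stands in for a "big" core.
cpu=2
mask=$(printf '%x' $((1 << cpu)))
echo "affinity mask for CPU $cpu: $mask"

# Pin a hypothetical IRQ (55, say the NIC's line from /proc/interrupts)
# to that core. Commented out: needs root and a real IRQ number.
# echo "$mask" > /proc/irq/55/smp_affinity

# Read back the current mask (readable without root on most kernels):
# cat /proc/irq/55/smp_affinity
```

irqbalance does essentially the same writes, just periodically and based on observed load.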
IIRC, MSIs have largely addressed the issue, so irqbalance is not so
useful anymore. E.g., /proc/interrupts on this system shows that the
NVMe drive gets 32 interrupt vectors and the Intel gigabit Ethernet
card gets eight per port, so no single interrupt line is busy.
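You can check the vector spread on your own box the same way; the device names below ("nvme", "eth") are examples and will differ per system:

```shell
# Count how many interrupt lines a device has claimed (one line per
# MSI/MSI-X vector in /proc/interrupts).
grep -c nvme /proc/interrupts

# Show the per-CPU delivery counts for those vectors, to see whether
# the load is already spread across cores.
grep nvme /proc/interrupts
```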
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)