[continued from previous message]
can scale much more (this is essentially Gustafson's Law).
And it looks like the following:
About parallelism and Gustafson's Law..
Gustafson’s Law:
• If you increase the amount of work done by each parallel
task then the serial component will not dominate
• Increase the problem size to maintain scaling
• Can do this by adding extra complexity or increasing the overall
problem size
Scaling is important: the better a code scales, the larger the machine it
can take advantage of:
• can consider weak and strong scaling
• in practice, overheads limit the scalability of real parallel programs
• Amdahl’s law models these in terms of serial and parallel fractions
• larger problems generally scale better: Gustafson’s law
Load balance is also a crucial factor.
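To make the two laws above concrete, here is a minimal Python sketch (my own illustration, not from the post) comparing the speedup predicted by Amdahl's law for a fixed problem size with the scaled speedup predicted by Gustafson's law; `s` denotes the serial fraction and `p` the number of processors:

```python
def amdahl_speedup(s: float, p: int) -> float:
    """Amdahl's law, fixed problem size: speedup = 1 / (s + (1 - s) / p).
    The serial fraction s caps the speedup at 1/s no matter how large p is."""
    return 1.0 / (s + (1.0 - s) / p)

def gustafson_speedup(s: float, p: int) -> float:
    """Gustafson's law, scaled problem size: speedup = s + p * (1 - s).
    Growing the problem with p keeps the serial part from dominating."""
    return s + p * (1.0 - s)

if __name__ == "__main__":
    s = 0.05  # assume a 5% serial fraction for illustration
    for p in (8, 64, 1024):
        print(f"p={p:5d}  Amdahl={amdahl_speedup(s, p):7.2f}  "
              f"Gustafson={gustafson_speedup(s, p):8.2f}")
```

With a 5% serial fraction, Amdahl's model saturates near a speedup of 20 even on 1024 processors, while Gustafson's scaled model keeps growing, which is the point being made above about larger problems scaling better.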
So read my following thoughts about the Threadpool to see that my
Threadpool, which scales very well, also does load balancing well:
---
About the Threadpool..
I have just read the following:
Concurrency - Throttling Concurrency in the CLR 4.0 ThreadPool
https://docs.microsoft.com/en-us/archive/msdn-magazine/2010/september/concurrency-throttling-concurrency-in-the-clr-4-0-threadpool
But I think that both of Microsoft's methodologies, the hill climbing
one and the control theory one using a band-pass filter or matched
filter and the discrete Fourier transform, have a weakness: they are
"localized" optimizations that maximize throughput, so they are not
fair. So I don't think I will implement them, and instead you can use my
following invention of an efficient Threadpool engine with priorities
that scales very well (and you can use a second Threadpool for IO
etc.):
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
And here is my other Threadpool engine with priorities:
https://sites.google.com/site/scalable68/threadpool
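The linked engines themselves are not reproduced here; as a hypothetical illustration of the general idea only (a thread pool whose workers always pull the highest-priority pending task, with equal priorities served FIFO), here is a minimal Python sketch. All class and method names are mine, and this is not the linked implementation:

```python
import queue
import threading

class PriorityThreadPool:
    """Minimal sketch: N workers share one priority queue.
    Lower priority number = served first; a sequence counter
    keeps equal-priority tasks in FIFO order (fairness)."""

    def __init__(self, workers: int = 4):
        self._tasks = queue.PriorityQueue()
        self._seq = 0                      # tie-breaker for equal priorities
        self._lock = threading.Lock()
        self._threads = [threading.Thread(target=self._worker, daemon=True)
                         for _ in range(workers)]
        for t in self._threads:
            t.start()

    def submit(self, priority: int, fn, *args):
        with self._lock:
            self._seq += 1
            self._tasks.put((priority, self._seq, fn, args))

    def _worker(self):
        while True:
            _, _, fn, args = self._tasks.get()
            try:
                if fn is None:             # shutdown sentinel
                    return
                fn(*args)
            finally:
                self._tasks.task_done()

    def wait(self):
        """Block until every submitted task has finished."""
        self._tasks.join()

    def shutdown(self):
        # Sentinels carry the lowest possible priority, so all
        # real pending work drains before the workers exit.
        for _ in self._threads:
            with self._lock:
                self._seq += 1
                self._tasks.put((1 << 30, self._seq, None, ()))
        for t in self._threads:
            t.join()
```

A shared queue like this load-balances naturally, since an idle worker takes the next task regardless of which thread submitted it; a more scalable design (closer to what a production engine would need) would use per-worker queues with work stealing to reduce contention on the single lock.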