[continued from previous message]
can use my following invention, an efficient Threadpool engine with priorities that scales very well (and you can use a second Threadpool
for IO etc.):
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
And here is my other Threadpool engine with priorities:
https://sites.google.com/site/scalable68/threadpool-engine-with-priorities
And read my following previous thoughts to understand more:
About the strategy of "work depth-first; steal breadth-first"..
I have just read the following webpage:
Why Too Many Threads Hurts Performance, and What to do About It
https://www.codeguru.com/cpp/sample_chapter/article.php/c13533/Why-Too-Many-Threads-Hurts-Performance-and-What-to-do-About-It.htm
Also I have just watched the following interesting video about the Go
scheduler and Go concurrency:
Dmitry Vyukov — Go scheduler: Implementing language with lightweight concurrency
https://www.youtube.com/watch?v=-K11rY57K7k
And I have just read the following webpage about the Threadpool of
Microsoft .NET 4.0:
https://blogs.msdn.microsoft.com/jennifer/2009/06/26/work-stealing-in-net-4-0/
And as you are noticing, the first web link above is speaking about the
strategy of "work depth-first; steal breadth-first", but we have to be
smarter, because I think that this strategy, which is advantageous for
cache locality, works best for recursive algorithms: a thread takes the
first task, and since the algorithm is recursive, it will push the child
tasks into its local work-stealing queue, and the other threads will
start to steal from that work-stealing queue, so the work will be
distributed correctly. But notice that since this strategy works best
for recursive algorithms, when you iteratively start many tasks, I think
we will have much more contention on the work-stealing queue, and this
is a weakness of this strategy. Other than that, when the algorithm is
not recursive, the threads receive from the global queue, so there will
be high contention on the global queue, and this is not good. MIT's
Cilk, the Go scheduler, the Threadpool of Microsoft, and Intel C++ TBB
are using this strategy of "work depth-first; steal breadth-first", and
as you are noticing, they are giving more preference to cache locality
than to scalability.
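To make the strategy above concrete, here is a minimal sketch (my own illustration in Python, not the code of any of those schedulers) of the deque discipline behind "work depth-first; steal breadth-first": the owner thread pushes and pops at the tail (LIFO, so it works on the freshest child task, which is good for cache locality), while a thief steals from the head (FIFO, so it takes the oldest task, which in a recursive decomposition is usually the largest remaining chunk of work).

```python
from collections import deque

class WorkStealingDeque:
    """Per-thread task deque: owner works at the tail, thieves steal at the head."""

    def __init__(self):
        self._tasks = deque()

    def push(self, task):
        # Owner: push a newly spawned child task at the tail.
        self._tasks.append(task)

    def pop(self):
        # Owner: depth-first, take the newest task first (cache-hot).
        return self._tasks.pop() if self._tasks else None

    def steal(self):
        # Thief: breadth-first, take the oldest task first.
        return self._tasks.popleft() if self._tasks else None

# The owner pushes tasks 1, 2, 3 in order (3 is the newest child task).
q = WorkStealingDeque()
for t in (1, 2, 3):
    q.push(t)

print(q.pop())    # → 3 : the owner works depth-first
print(q.steal())  # → 1 : a thief steals breadth-first
```

Real implementations use a lock-free deque (e.g. the Chase-Lev deque) so that the owner and thieves rarely contend; this sketch only shows the access pattern, not the synchronization.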
But in my following invention of a Threadpool that scales very well, I am
giving more preference to scalability than to cache locality:
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
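For contrast, here is a minimal sketch (my own illustration in Python, not the implementation behind the link above) of the opposite design point: a threadpool built around one shared global queue. Every worker takes from the same queue, which distributes work evenly across threads, but as noted above, a single shared queue can itself become a contention point if it is protected by one lock.

```python
import queue
import threading

class GlobalQueuePool:
    """A tiny threadpool where all workers pull from one shared global queue."""

    def __init__(self, workers):
        self._queue = queue.Queue()   # thread-safe shared global queue
        self._threads = [threading.Thread(target=self._worker, daemon=True)
                         for _ in range(workers)]
        for t in self._threads:
            t.start()

    def _worker(self):
        while True:
            task = self._queue.get()
            if task is None:          # sentinel: shut this worker down
                break
            task()

    def submit(self, task):
        self._queue.put(task)

    def shutdown(self):
        # One sentinel per worker, then wait for them all to finish.
        for _ in self._threads:
            self._queue.put(None)
        for t in self._threads:
            t.join()

results = []
pool = GlobalQueuePool(4)
for i in range(10):
    # list.append is atomic under CPython's GIL, so no extra lock is needed here.
    pool.submit(lambda i=i: results.append(i * i))
pool.shutdown()
print(sorted(results))  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because the sentinels are enqueued after all submitted tasks, every task runs before the workers exit.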
Other than that, when you are doing IO with my Threadpool, you can use
asynchronous IO by starting a dedicated thread for IO to be more
efficient, or you can start another instance of my Threadpool and use it
for tasks that do IO; you can use the same method when threads of my
Threadpool are waiting or sleeping..
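The idea of keeping a separate pool for IO-bound tasks can be sketched with Python's standard-library ThreadPoolExecutor (this is only an analogue, not the author's Threadpool): one pool sized for CPU-bound tasks and a second, larger pool for tasks that block on IO or sleep, so that blocked threads never starve the compute pool.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def cpu_task(n):
    # Compute-bound work: sum of squares 0..n-1.
    return sum(i * i for i in range(n))

def io_task(delay):
    time.sleep(delay)   # stands in for blocking socket or disk IO
    return delay

cpu_pool = ThreadPoolExecutor(max_workers=4)   # ~ the number of cores
io_pool = ThreadPoolExecutor(max_workers=32)   # many blocked threads are cheap

cpu_future = cpu_pool.submit(cpu_task, 10_000)
io_future = io_pool.submit(io_task, 0.01)

print(cpu_future.result())
print(io_future.result())

cpu_pool.shutdown()
io_pool.shutdown()
```

The worker counts here are illustrative: the CPU pool should track core count, while the IO pool can be much larger because its threads spend most of their time blocked.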
Other than that, for recursion and the stack overflow problem, you can
convert your function from recursive to iterative form to solve the
problem of stack overflow.
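A small example of that conversion (my own illustration): the recursive version below overflows the call stack on deep inputs, while the iterative version unrolls the recursion into a loop and handles any depth.

```python
def sum_to_recursive(n):
    # Overflows the call stack for large n (e.g. n = 1_000_000 in Python,
    # where the default recursion limit is about 1000).
    return 0 if n == 0 else n + sum_to_recursive(n - 1)

def sum_to_iterative(n):
    # The same computation with the recursion unrolled into a loop:
    # depth is now constant, so there is no stack overflow.
    total = 0
    while n > 0:
        total += n
        n -= 1
    return total

print(sum_to_iterative(1_000_000))  # → 500000500000, far beyond any safe recursion depth
```

For non-tail recursions (e.g. tree traversals), the same conversion works by keeping an explicit stack of pending work in a heap-allocated list instead of on the call stack.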
Other than that, to be able to serve a great number of internet
connections or TCP/IP socket connections, you can use my Threadpool with
my powerful Object-oriented Stackful coroutines library for Delphi and
FreePascal here:
https://sites.google.com/site/scalable68/object-oriented-stackful-coroutines-library-for-delphi-and-freepascal
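The library above provides stackful coroutines for Delphi and FreePascal; as a rough standard-library analogue (not that library's API), Python's asyncio coroutines show the same idea: one cheap coroutine per connection instead of one OS thread per connection, so thousands of concurrent "connections" are multiplexed on a single thread.

```python
import asyncio

async def handle_connection(conn_id):
    # Stands in for waiting on socket IO; the coroutine is suspended
    # here without blocking an OS thread.
    await asyncio.sleep(0.001)
    return conn_id

async def serve_all(n):
    # n concurrent handlers, all multiplexed on one OS thread.
    results = await asyncio.gather(*(handle_connection(i) for i in range(n)))
    return len(results)

count = asyncio.run(serve_all(10_000))
print(count)  # → 10000
```

Ten thousand OS threads would be prohibitively expensive; ten thousand coroutines are not, which is why coroutine-based designs scale to large connection counts.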
---
And enhancing productivity is also related to my following powerful
product that I have designed and implemented (it can also be applied to
organizations):
https://sites.google.com/site/scalable68/universal-scalability-law-for-delphi-and-freepascal
Please read the following about Applying the Universal Scalability Law
to organisations:
https://blog.acolyer.org/2015/04/29/applying-the-universal-scalability-law-to-organisations/
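For readers unfamiliar with it, the Universal Scalability Law is Gunther's formula C(N) = N / (1 + alpha*(N-1) + beta*N*(N-1)), where C(N) is the relative capacity with N processors (or team members), alpha is the contention coefficient, and beta is the coherency (crosstalk) coefficient. The sketch below is my own illustration with made-up coefficient values, not the author's product:

```python
def usl_capacity(n, alpha, beta):
    # Universal Scalability Law:
    #   C(N) = N / (1 + alpha*(N - 1) + beta*N*(N - 1))
    # alpha models contention (serialization), beta models coherency
    # (pairwise crosstalk between the N participants).
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

alpha, beta = 0.05, 0.001   # hypothetical coefficients for illustration
for n in (1, 8, 32, 128):
    print(n, round(usl_capacity(n, alpha, beta), 2))
```

With any beta > 0 the curve eventually turns over: past some N, adding more processors (or people) reduces total capacity, which is the law's key lesson for both systems and organisations.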
Yet more philosophy about quality control and quality..
So first you have to define quality (read below about it), second you
have to construct quality, and third you have to control quality.
So, I have just read the following about the Central Limit Theorem (I
understood it), and I invite you to read it carefully:
https://www.probabilitycourse.com/chapter7/7_1_2_central_limit_theorem.php
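The theorem on that page can be seen in a small simulation (my own illustration): averages of samples drawn from a very non-normal (uniform) distribution cluster around the true mean, with the spread of those averages shrinking like sigma/sqrt(n), which is exactly what control charts in quality control rely on.

```python
import random
import statistics

random.seed(42)   # fixed seed so the simulation is reproducible

def sample_mean(n):
    # Mean of n draws from Uniform(0, 1): true mean 0.5, sigma = 1/sqrt(12).
    return statistics.fmean(random.uniform(0, 1) for _ in range(n))

# 1,000 sample means, each computed from n = 30 observations.
means = [sample_mean(30) for _ in range(1000)]

# The sample means are approximately normal around 0.5, with standard
# deviation close to sigma/sqrt(30) ≈ 0.053, as the CLT predicts.
print(round(statistics.fmean(means), 2))
print(round(statistics.stdev(means), 3))
```

This is why an X-bar control chart can use normal-theory limits even when the individual measurements are not normally distributed: the chart plots sample means, and the CLT makes those approximately normal.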
So as you are noticing, this Central Limit Theorem is so important for
quality control; read the following to notice it (I also understood
Statistica