• More philosophy about Islam of Sufism.. (3/4)

    From World90@21:1/5 to All on Fri Apr 23 15:32:41 2021
    [continued from previous message]

    scalable algorithms, and I have decided to share some of them with
    others; that is the collaboration side. So look, for example, at the
    following inventions of scalable algorithms that I have shared with
    others; here they are:

    https://sites.google.com/site/scalable68/scalable-mlock

    https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

    https://sites.google.com/site/scalable68/scalable-rwlock

    https://sites.google.com/site/scalable68/scalable-rwlock-that-works-accross-processes-and-threads

    https://groups.google.com/forum/#!topic/comp.programming.threads/VaOo1WVACgs

    https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well

    I have also decided not to share others of my scalable algorithms; that
    is the competition side, so I am seeking a balance between
    collaboration and competition.

    More philosophy about what artificial intelligence is, and more..

    I am a white Arab, and I think I am smart, since I have also invented
    many scalable algorithms and other algorithms, and when you are smart
    you will more easily understand artificial intelligence; this is why I
    find artificial intelligence easy to learn. I think that to understand
    artificial intelligence you have to understand reasoning with energy
    minimization, for example with PSO (Particle Swarm Optimization), but
    you have to be careful, since the population-based algorithm has to
    guarantee convergence to the optimum, and this is why I am showing you
    how to do it (read below). I think that GA (the genetic algorithm) is
    good for teaching it, but GA does not guarantee convergence to the
    optimum. After learning how to do reasoning with energy minimization in
    artificial intelligence, you have to understand what transfer learning
    is, with PathNet or the like; transfer learning permits training faster
    and requires less labeled data, and PathNet is much more powerful since
    it is also a higher-level abstraction in artificial intelligence..

    Read about it here:

    https://mattturck.com/frontierai/


    And read about PathNet here:

    https://medium.com/@thoszymkowiak/deepmind-just-published-a-mind-blowing-paper-pathnet-f72b1ed38d46


    More about artificial intelligence..

    I think one of the most important parts of artificial intelligence is
    reasoning with energy minimization; it is the one that I am working on
    right now. See the following video to understand more about it:

    Yann LeCun: Can Neural Networks Reason?

    https://www.youtube.com/watch?v=YAfwNEY826I&t=250s

    Since I have just understood much more about artificial intelligence, I
    will soon show you my next open source software project, which
    implements a powerful parallel linear programming solver and a powerful
    parallel mixed-integer programming solver with artificial intelligence
    using PSO, and I will write an article that explains much more about
    artificial intelligence and about what smartness, consciousness, and
    self-awareness are..

    And in only one day I have learned much more artificial intelligence: I
    have read the following article about Particle Swarm Optimization and I
    have understood it:

    Artificial Intelligence - Particle Swarm Optimization

    https://docs.microsoft.com/en-us/archive/msdn-magazine/2011/august/artificial-intelligence-particle-swarm-optimization

    But I have noticed that the above implementation does not guarantee
    convergence to the optimum.

    So here is how to guarantee convergence to the optimum in PSO:

    Clerc and Kennedy (as presented in Trelea 2003) propose constriction
    coefficient parameter selection guidelines that guarantee convergence
    to the optimum; here is how to do it with PSO:

    v(t+1) = k * [v(t) + c1 * r1 * (p(t) - x(t)) + c2 * r2 * (g(t) - x(t))]

    x(t+1) = x(t) + v(t+1)

    where the constriction coefficient k is:

    k = 2/abs(2 - phi - sqrt(phi^2 - 4*phi))

    with phi = c1 + c2 and phi > 4

    To guarantee convergence to the optimum, use:

    c1 = c2 = 2.05, so phi = 4.1

    which gives k = 2/abs((2 - 4.1) - 0.640) = 0.7298

    and the inertia weight w = 0.7298

    Population size = 60

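    As a minimal sketch, assuming a simple sphere function as the energy to
    minimize (the function and parameter names here are mine, not from the
    article above), the constriction-coefficient update rule can be
    implemented like this in Python:

```python
import math
import random

def pso_constriction(f, dim, bounds, iters=200, pop=60, seed=42):
    """Minimize f over [lo, hi]^dim with PSO using the Clerc-Kennedy
    constriction coefficient (c1 = c2 = 2.05, so phi = 4.1, k ~ 0.7298)."""
    rng = random.Random(seed)
    c1 = c2 = 2.05
    phi = c1 + c2                                              # phi = 4.1 > 4
    k = 2.0 / abs(2 - phi - math.sqrt(phi * phi - 4 * phi))    # ~ 0.7298
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    vs = [[0.0] * dim for _ in range(pop)]
    pbest = [list(x) for x in xs]                 # per-particle best position
    pval = [f(x) for x in xs]
    gbest = list(pbest[min(range(pop), key=lambda i: pval[i])])
    gval = min(pval)                              # global best value so far
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # v(t+1) = k * [v(t) + c1*r1*(p - x) + c2*r2*(g - x)]
                vs[i][d] = k * (vs[i][d]
                                + c1 * r1 * (pbest[i][d] - xs[i][d])
                                + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]              # x(t+1) = x(t) + v(t+1)
            val = f(xs[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, list(xs[i])
                if val < gval:
                    gval, gbest = val, list(xs[i])
    return gbest, gval

sphere = lambda x: sum(v * v for v in x)          # global minimum 0 at origin
best, best_val = pso_constriction(sphere, dim=3, bounds=(-5.0, 5.0))
```

    With these settings the swarm converges close to the origin on this toy
    function; note that the guarantee concerns the parameter choice, not
    any particular problem.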

    Also, I have noticed that GA (the genetic algorithm) does not guarantee
    convergence to the optimum, and SA (simulated annealing) and hill
    climbing are much less powerful, since they perform only exploitation.

    In general, any metaheuristic should have two main search capabilities:
    exploration and exploitation. Population-based algorithms (many
    solutions), such as GA, PSO, ACO, or ABC, perform both exploration and
    exploitation, while single-solution algorithms, such as SA (simulated
    annealing) and hill climbing, perform only exploitation.

    In this case, more exploitation and less exploration increases the
    chance of getting trapped in a local optimum, because the algorithm
    does not have the ability to search in another position far from the
    current best solution (which is exploration).

    Simulated annealing starts in one valley and typically ends in the
    lowest point of the same valley, whereas swarms start in many different
    places of the mountain range and search for the lowest point in many
    valleys simultaneously.
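    To make the valley picture concrete, here is a small sketch (the toy
    function and names are mine): a pure hill climber started in the wrong
    valley stays trapped in its local minimum, while even a crude form of
    exploration, restarting from several places, finds the global minimum:

```python
def f(x):
    # Two valleys: a local minimum at x = 3 (f = 1) and the
    # global minimum at x = 0 (f = 0).
    return min(x * x, (x - 3) ** 2 + 1)

def hill_climb(f, x0, step=0.01, iters=10000):
    """Exploitation only: move to a neighbour only if it is strictly better."""
    x = x0
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if f(best) >= f(x):
            break                      # no better neighbour: stuck
        x = best
    return x

trapped = hill_climb(f, 4.0)           # starts in the right-hand valley
starts = (-4.0, -1.0, 2.0, 4.0)        # crude exploration: multiple starts
multi = min((hill_climb(f, s) for s in starts), key=f)
```

    A population-based method like PSO does this exploration within a
    single run, since the particles start spread over the whole range.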

    And in my next open source software project I will implement a powerful
    parallel linear programming solver and a powerful parallel
    mixed-integer programming solver with artificial intelligence using PSO.


    More of my philosophy about Economies of scale..

    Yesterday I wrote the following about Algeria and such countries:

    ------

    More political philosophy about Algeria and such countries..

    I invite you to look at this video:

    Les raisons qui expliquent la cherté de la vie et dont personne n'en
    parle en Algérie (The reasons that explain the high cost of living in
    Algeria, which nobody talks about)

    https://www.youtube.com/watch?v=d6B8jPLXiNc


    As you can see in the above video, countries such as Algeria lack
    organization, and you will quickly notice how high the cost of living
    in Algeria is; I am a philosopher, so I will explain with my smartness:

    I think the best way is the following:

    Economies of scale result in the most affordable price of any product
    for the consumer without the manufacturer having to sacrifice profits.

    But you have to be smart, since this is a level of abstraction of what
    must happen in Algeria so that the cost of living drops; it is a level
    of abstraction that needs more precision, and here is more precision
    with my smartness:

    Look at the following video of Peter Diamandis, who has the positive
    spirit that I am talking about in my thoughts below; you have to watch
    it carefully, because it speaks about one of the most important things:
    exponential thinking and exponential progress:

    How to Think Bigger: Thinking Big and Bold | Peter Diamandis

    https://www.youtube.com/watch?v=zAVxI5wWGKU

    And read about Peter Diamandis here:

    https://en.wikipedia.org/wiki/Peter_Diamandis

    So this brings more precision to the above level of abstraction: we
    have to think much bigger, to think exponentially, and to think about
    our exponential progress, and this way makes us progress really fast.
    This is why I think that we should not be pessimistic about Algeria and
    such countries, since I think they will quickly learn how to take
    advantage of this exponential progress of our humanity.

    ------------


    But I have to be more precise, so you have to read the following on
    microeconomics:

    Economies of Scale

    https://courses.lumenlearning.com/suny-microeconomics/chapter/economies-of-scale/

    As you have noticed, there can be diminishing marginal returns in
    economies of scale, but I have also talked about this in my following
    thoughts:
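    As a toy numeric sketch of the U-shaped long-run average cost curve
    described on that page (all numbers here are made up for illustration):
    spreading fixed costs over more units gives economies of scale, while a
    congestion term eventually gives diminishing returns:

```python
def average_cost(q, fixed=1000.0, unit_cost=2.0, congestion=0.0005):
    """Toy long-run average cost per unit at output quantity q:
    fixed costs spread over q units, a constant unit cost, and a
    congestion (diseconomies-of-scale) term that grows with q."""
    return fixed / q + unit_cost + congestion * q

small = average_cost(100)      # 10.00 + 2 + 0.05 = 12.05 per unit
medium = average_cost(1500)    # about 0.67 + 2 + 0.75 = 3.42 per unit
large = average_cost(10000)    # 0.10 + 2 + 5.00 = 7.10 per unit
```

    The middle quantity has the lowest average cost: costs fall at first as
    scale grows, then rise again once diminishing returns dominate.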

    More of my philosophy about complexity and productivity and quality..

    So i will ask a philosophical question:


    How to manage efficiently complexity ?


    I think you can manage complexity by the “divide and rule” approach to
    management, which also leads to the hierarchical division of large
    organisations, or which also leads to the division of "labour"; you can
    read more about the division of labour here:


    https://en.wikipedia.org/wiki/Division_of_labour


    Also, you can manage complexity by using constraints, such as laws,
    road rules, and commercial standards, all of which limit the potential
    for harmful interactions to occur. You can also manage complexity by
    using higher layers of abstraction, such as in computer programming,
    and we can also follow the efficient rule of "Do less and do it
    better", which can also use higher-level layers of abstraction to
    enhance productivity and quality; this rule is good for productivity
    and quality. And about productivity: I have also just posted the
    following thoughts from the following PhD computer scientist:


    https://lemire.me/blog/about-me/


    Read more here his thoughts about productivity:


    https://lemire.me/blog/2012/10/15/you-cannot-scale-creativity/


    And i think he is making a mistake:


    Since we have that Productivity = Output/Input


    But better human training and/or better tools and/or better human
    smartness and/or better human capacity can make the parallel part of
    productivity much bigger than the serial part, so productivity can
    scale much more (it is like Gustafson's Law).


    And it looks like the following:


    About parallelism and about Gustafson’s Law..


    Gustafson’s Law:


    • If you increase the amount of work done by each parallel
    task then the serial component will not dominate
    • Increase the problem size to maintain scaling
    • Can do this by adding extra complexity or increasing the overall
    problem size


    Scaling is important, since the more a code scales, the larger a
    machine it can take advantage of:


    • can consider weak and strong scaling
    • in practice, overheads limit the scalability of real parallel programs
    • Amdahl’s law models these in terms of serial and parallel fractions
    • larger problems generally scale better: Gustafson’s law
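    The bullet points above can be put in formula form: with a serial
    fraction s of the work and N processors, Gustafson's scaled speedup is
    S(N) = s + (1 - s) * N. A minimal sketch (the function name is mine):

```python
def gustafson_speedup(n_procs, serial_frac):
    """Gustafson's law: the problem grows with the machine, so the
    scaled speedup is S(N) = s + (1 - s) * N."""
    return serial_frac + (1.0 - serial_frac) * n_procs

# With a 5% serial fraction, 64 processors give a scaled speedup
# of 0.05 + 0.95 * 64 = 60.85, instead of Amdahl's fixed-size
# limit of 1 / s = 20.
speedup = gustafson_speedup(64, 0.05)
```

    This is why larger problems generally scale better: growing the problem
    with the machine keeps the serial component from dominating.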


    Load balance is also a crucial factor.


    So read my following thoughts about the threadpool to notice that my
    threadpool, which scales very well, also does load balancing well:


    ---


    About the Threadpool..


    I have just read the following:


    Concurrency - Throttling Concurrency in the CLR 4.0 ThreadPool


    https://docs.microsoft.com/en-us/archive/msdn-magazine/2010/september/concurrency-throttling-concurrency-in-the-clr-4-0-threadpool


    But I think that both of the methodologies from Microsoft, the hill
    climbing one and the control theory one using a band-pass filter or a
    matched filter and the discrete Fourier transform, have a weakness:
    they are "localized" optimizations that maximize throughput, so they
    are not fair, and so I don't think I will implement them. Instead, you
    can use my following invention of an efficient threadpool engine with
    priorities that scales very well (and you can use a second threadpool
    for I/O, etc.):


    https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
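    As an illustration of the idea only (this is a minimal Python sketch of
    a threadpool with priorities, not the engine linked above), a shared
    priority queue can feed the worker threads so that lower-numbered,
    i.e. higher-priority, tasks are picked first:

```python
import itertools
import queue
import threading

class PriorityThreadPool:
    """Minimal threadpool sketch: workers pull tasks from a shared
    PriorityQueue; a lower priority number means the task runs sooner."""

    def __init__(self, workers=4):
        self._q = queue.PriorityQueue()
        self._counter = itertools.count()   # tie-breaker keeps equal priorities FIFO
        self._threads = [threading.Thread(target=self._worker, daemon=True)
                         for _ in range(workers)]
        for t in self._threads:
            t.start()

    def submit(self, priority, fn, *args):
        self._q.put((priority, next(self._counter), fn, args))

    def _worker(self):
        while True:
            _, _, fn, args = self._q.get()
            if fn is None:                  # shutdown sentinel
                return
            fn(*args)

    def shutdown(self):
        # Sentinels get the worst possible priority, so all pending
        # real tasks are drained before the workers exit.
        for _ in self._threads:
            self._q.put((float("inf"), next(self._counter), None, ()))
        for t in self._threads:
            t.join()

pool = PriorityThreadPool(workers=2)
done, lock = [], threading.Lock()
def job(tag):
    with lock:
        done.append(tag)
pool.submit(5, job, "low-priority")
pool.submit(1, job, "high-priority")
pool.shutdown()                             # both jobs run before the pool exits
```

    A real engine would also need fairness across priorities to avoid
    starving low-priority tasks, which is exactly the concern raised above
    about throughput-only tuning.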


    And here is my other Threadpool engine with priorities:


    https://sites.google.com/site/scalable68/threadpool-e