• More of my philosophy about composability and Haskell functional language and more..

    From Amine Moulay Ramdane@21:1/5 to All on Wed Dec 1 13:17:08 2021
    Hello,


    More of my philosophy about composability and Haskell functional language and more..

    I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..

    This is my last post here in this Ada newsgroup, so read it carefully:

    I have just quickly read the following article about composability, and I invite you to read it carefully:

    https://bartoszmilewski.com/2014/06/09/the-functional-revolution-in-c/

    I am not in agreement with the above article. I think that the above scientist programs in the Haskell functional language and that, for him, it is the way to composability, since he says that functional programming in the style of Haskell is what allows composability in the presence of concurrency, while lock-based systems, in his view, don't allow it. But I don't agree with him, and I will give you the logical proof of it. Here it is: read what an ACM article, written by Bryan M. Cantrill and Jeff Bonwick of Sun Microsystems, says:

    You can read about Bryan M. Cantrill here:

    https://en.wikipedia.org/wiki/Bryan_Cantrill

    And you can read about Jeff Bonwick here:

    https://en.wikipedia.org/wiki/Jeff_Bonwick

    And here is what the article says about the composability of lock-based systems in the presence of concurrency:

    "Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:

    “Locks and condition variables do not support modular programming,” reads one typically brazen claim, “building large programs by gluing together smaller programs[:] locks make this impossible.”9 The claim, of course, is incorrect. For evidence
    one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

    There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to
    user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between
    software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

    Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be
    no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the
    subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is
    sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized."

    Read more here:

    https://queue.acm.org/detail.cfm?id=1454462
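
    To make the first of the two approaches described above more concrete, here is a minimal sketch in Go, under my own assumptions (the Registry type and all of its names are my illustration, not code from the ACM article): the mutex is acquired and released entirely inside the subsystem, so control never returns to the caller with the lock held, and the subsystem stays composable:

    --

    package main

    import (
        "fmt"
        "sync"
    )

    // Registry is a small subsystem whose locking is entirely internal:
    // the mutex is taken and released inside every method, so control
    // never returns to the caller with the lock held.
    type Registry struct {
        mu    sync.Mutex
        items map[string]int
    }

    func NewRegistry() *Registry {
        return &Registry{items: make(map[string]int)}
    }

    // Put stores a value; the lock is invisible to the caller.
    func (r *Registry) Put(key string, value int) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.items[key] = value
    }

    // Get reads a value; again the lock never escapes the call.
    func (r *Registry) Get(key string) (int, bool) {
        r.mu.Lock()
        defer r.mu.Unlock()
        v, ok := r.items[key]
        return v, ok
    }

    func main() {
        r := NewRegistry()
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                r.Put(fmt.Sprintf("worker-%d", i), i)
            }(i)
        }
        wg.Wait()
        if v, ok := r.Get("worker-2"); ok {
            fmt.Println("worker-2 stored", v)
        }
    }

    --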

    More about Channels, Concurrency and lightweight tasks and more..

    I think I am smart and I am like a software architect, and I am an inventor of many scalable software algorithms and other algorithms, so I will continue to explain more:

    From a software architecture point of view, I think that the Go programming language from Google has made some big architectural errors. For example, if you take a look at the implementation of channels in the Go source code, you will notice that the mutex of the channels spins for about 1 ms or 2 ms, and that is not good, since it is not so efficient; it is, by analogy, like the excessive cache-coherence traffic that queue locks are used to prevent. I mean that since the mutex of the channels in the Go language spins for 1 ms or 2 ms, it makes things slow, because thread switching is slow, and it is also not convoy resistant. And that is not the only disadvantage: I will talk about the other disadvantages of Go, and more, in my next book about parallel programming and concurrency that I will sell. So I think that the Go and Rust languages are not so efficient; read all of my following thoughts so that you understand more:

    I have just taken a quick look at the following video about concurrency, channels and lightweight tasks, and I invite you to look at it:

    Andy Wingo - Channels, Concurrency, and Cores: A new Concurrent ML implementation

    https://www.youtube.com/watch?v=7IcI6sl5oBc

    I think I am smart, and I think that the solution in the above video has the same problem as the Go language with its channels and lightweight goroutines from Google, since I have looked at Go and it uses mutexes in its implementation of channels that are not so efficient. So the problem is, by logical analogy, like the problem of a "monopolization" that is not so business friendly: we can think that others can still invent queues or locks or mutexes that are much more efficient, but how can they sell them "advantageously" to people around the world if Google or other companies are monopolizing with their Go language and the like? So I think that this type of concurrency, with lightweight tasks and channels included directly inside a language, has the disadvantage of not being business friendly.

    More of my philosophy about the new Rust language and about CSP (Communicating Sequential Processes) and more..

    I think I am smart, and I will say that the new programming language called Rust is not so efficient, since the problem is that the Rust compiler comes with a race detector, and race detection is NP-hard or NP-complete, so it is not efficient at all in terms of time complexity, and it also means that the Rust compiler will never know, in race detection, which races are real race conditions (read the following paper so that you understand it: https://www.sjsu.edu/people/robert.chun/courses/cs159/s3/Y.pdf), and this is also the reason why it is difficult to use a tool to find race conditions accurately; C++ has the same problem as Rust. So the best way is to use channels and processes as in SuperPascal, or channels and "lightweight" tasks like goroutines in Go, so as to avoid race conditions and similar parallel programming bugs, as in the following example for Go that shows how to avoid race conditions (a small sketch of the idea follows the link):

    https://fodor.org/blog/go-avoiding-race-conditions/
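
    And here is a minimal sketch, in the spirit of the linked article but not its exact code, of how Go channels avoid a race condition: the counter is owned by a single goroutine, and the other goroutines communicate with it over channels instead of mutating shared state:

    --

    package main

    import "fmt"

    func main() {
        increments := make(chan int)  // requests to add to the counter
        done := make(chan struct{})   // signals that a worker has finished
        result := make(chan int)      // carries the final value out

        // The only goroutine that ever touches 'counter', so there is
        // no shared mutable state and hence no data race.
        go func() {
            counter := 0
            for n := range increments {
                counter += n
            }
            result <- counter
        }()

        // Several workers send their work over the channel instead of
        // touching the counter directly.
        const workers = 4
        for i := 0; i < workers; i++ {
            go func() {
                for j := 0; j < 1000; j++ {
                    increments <- 1
                }
                done <- struct{}{}
            }()
        }

        // Wait for the workers, then close the channel so the owning
        // goroutine can publish the final result.
        for i := 0; i < workers; i++ {
            <-done
        }
        close(increments)
        fmt.Println(<-result) // prints 4000
    }

    --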

    And I think that Go and Rust have another disadvantage, and here it is:

    More of my philosophy about the excessive purism of Rust and more..

    I think I am smart, and I think that the Rust compiler and language are too "purist"; it looks like the excessive purism of the Haskell functional programming language. I say this because Rust and Go don't provide us with OOP inheritance, and that is too restrictive; it is a deficiency of Rust, since inheritance has advantages and disadvantages, so we have to balance things well and also provide inheritance in order to be efficient. So I think that C++ and C# are better than Rust in this regard, and here are the advantages and disadvantages of OOP inheritance:

    https://www.ianswer4u.com/2017/09/oops-inheritance-advantages.html

    As a software developer you have to become more efficient and productive. So you need to make sure the code you write is easily reusable and maintainable. And, among other things, this is what inheritance gives you - the ability to reuse without
    reinventing the wheel, as well as the ability to easily maintain your base object without having to perform maintenance on all similar objects.

    More of my philosophy about SuperPascal and about CSP (Communicating Sequential Processes) and more..

    I think I am smart, and I also program in the Object Pascal of Delphi and Freepascal, and I think I am also a smart "Wirthian" programmer of the Wirthian family of ALGOL-like languages, since I have programmed in Pascal and I have also programmed in SuperPascal (you can read about it here: https://en.wikipedia.org/wiki/SuperPascal), and I have programmed in the Object Pascal of Delphi and Freepascal. And I know more about SuperPascal: it was an interesting enhancement of the Pascal language that brought an enhancement in the form of a "forall" statement, which is like a parallel for loop, and an enhancement in the form of "channels" that look like Go channels and permit you to write parallel programs. So the SuperPascal channels allowed us to program as in CSP (Communicating Sequential Processes), which is a formal language for describing patterns of interaction in concurrent systems. And CSP is a member of the family of mathematical theories of concurrency known as process algebras, or process calculi, based on message passing via channels. So SuperPascal channels allowed us to avoid parallel bugs such as race conditions, but I think that those channels can also be used in a simpler way, as in the following article, so that they permit you to avoid race conditions, and that is, I think, a much better enhancement. So read the following article so that you know about the simpler way of using Go channels or SuperPascal channels to avoid race conditions (a small "forall"-style sketch follows the link):

    https://fodor.org/blog/go-avoiding-race-conditions/
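
    And so that you can picture the "forall" idea of SuperPascal in today's terms, here is a rough Go analogue of my own (an illustration, not SuperPascal code): each iteration of the loop runs as its own lightweight task and sends its result back over a channel instead of writing to shared variables:

    --

    package main

    import "fmt"

    func square(x int) int { return x * x }

    func main() {
        const n = 8
        results := make(chan [2]int, n) // (index, value) pairs

        // "forall i := 0 to n-1 do ..." : spawn one goroutine per iteration.
        for i := 0; i < n; i++ {
            go func(i int) {
                results <- [2]int{i, square(i)}
            }(i)
        }

        // Collect the n results; order of arrival may differ from index order.
        out := make([]int, n)
        for k := 0; k < n; k++ {
            r := <-results
            out[r[0]] = r[1]
        }
        fmt.Println(out) // [0 1 4 9 16 25 36 49]
    }

    --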

    And so that you get an idea about SuperPascal, you can look at its source code in Freepascal here on GitHub:

    https://github.com/octonion/superpascal

    So, as you notice, the SuperPascal programming language, which was invented in the year 1993, preceded the Go programming language in providing channels and the like that permit you to do parallel programming while avoiding race conditions and similar parallel programming bugs.

    But you have to know that I am smart and I have also enhanced the Object Pascal of Freepascal and Delphi by inventing the following threadpool that scales well and that supports a parallel for loop; you can read about it carefully here on my website:

    https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well

    And I have also enhanced the Object Pascal of Freepascal and Delphi by inventing a scalable reference counting with efficient support for weak references; you can take a careful look at it here on my website:

    https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

    So, as you notice, I am also an inventor of many scalable algorithms and other algorithms..

    More of my philosophy about stack memory allocations and about preemptive and non-preemptive timesharing..

    I think I am smart, and as you are noticing in my thoughts below, I am abstracting smartly so as to make you understand preemptive and non-preemptive timesharing. Other than that, I will also give you an interesting stack memory allocation algorithm in Delphi and Freepascal, so that it can be used smartly with my sophisticated stackful coroutines library below; I will extend my sophisticated stackful coroutines library so that it supports it smartly, and here it is:

    --

    { Stack memory allocation with per-size free lists: pool[index] is the
      head of a free list of released blocks of one size class, and new
      blocks are carved from the top of the memory array.
      Note: raise Exception.Create needs SysUtils in the uses clause, and
      the constant values below are only example values. }

    const
      limit = 10;    { number of size classes (example value) }
      min   = 1;     { lower bound of the memory array (example value) }
      max   = 10000; { upper bound of the memory array (example value) }
      empty = 0;     { marks an empty free list; with min = 1, address 0
                       can never be a real block address }

    var
      pool: array [1..limit] of integer;   { free-list head per size class }
      memory: array [min..max] of integer; { the arena; the first cell of a
                                             free block links to the next one }
      top: integer;                        { highest address allocated so far }

    procedure initialize;
    var
      index: integer;
    begin
      for index := 1 to limit do
        pool[index] := empty;
      top := min - 1
    end;

    procedure allocate(index, length: integer; var address: integer);
    begin
      address := pool[index];
      if address <> empty then
        { reuse a previously released block of this size class }
        pool[index] := memory[address]
      else
      begin
        { no free block of this size class: carve a new one from the top }
        address := top + 1;
        top := top + length;
        if not (top <= max) then
          raise Exception.Create('Stack overflow..')
      end
    end;

    procedure release(index, address: integer);
    begin
      { push the released block onto the free list of its size class }
      memory[address] := pool[index];
      pool[index] := address
    end;

    --
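
    Note that in the above algorithm, pool[index] is the head of a free list of previously released blocks of one size class, the memory array holds both the allocated blocks and the links of those free lists, and top only grows when no released block of the requested size class is available. So a caller has to pass the same index (and the length that this index stands for) to allocate and to release, otherwise blocks of different lengths would be mixed in the same free list.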


    More of my philosophy about the paper and about preemptive and non-preemptive timesharing and more..

    I had just forgotten to post about who wrote the following paper about cooperative and preemptive tasking:

    https://users.ece.cmu.edu/~koopman/pubs/koopman90_HeavyweightTasking.pdf

    Here is the professor who wrote it: Professor Philip Koopman of Carnegie Mellon University.