• More precision about Wait-free and Lock-free and about OneFile..

    From Wisdom91@21:1/5 to All on Mon Jul 13 07:48:22 2020
    Hello..


    More precision about Wait-free and Lock-free and about OneFile..

I am a white Arab, and I think I am smart like a "genius",
since I have invented many scalable algorithms and their
implementations, and today I will make you understand more:

Look at this new invention by a PhD researcher:

    OneFile - The world's first wait-free Software Transactional Memory

    http://concurrencyfreaks.blogspot.com/2019/04/onefile-worlds-first-wait-free-software.html

    And look carefully at this video about it by its inventor:

    Pedro Ramalhete — Wait-free data structures and wait-free transactions

    https://www.youtube.com/watch?v=oDfr9w9p8XY

So I think that this invention is not so smart, because
I have just read its source code here:

    https://github.com/pramalhe/OneFile/blob/master/stms/OneFileWF.hpp

    And it says the following:

    Progress condition of the readers transactions: wait-free (bounded by
    the number of threads + MAX_READ_TRIES)

    and:

    The progress condition of the writers transactions: wait-free (bounded
    by the number of threads)

1- So as you are noticing, it has a weakness, and it is the same one as
the lock-free version: it wastes CPU time in its progress condition,
and this wasted time is not good for scalability, so it is not good for
parallel programming. For the algorithm to "generalize", it has to work
correctly not only for the much more common read-mostly workloads but
also for cases where write transactions form a greater proportion of
the total mix of reader and writer transactions. My scalable and fair
lock-based read-write locks are better in this regard, so read below
about my new invention of the Holy Grail of locks that I will use, and
you will notice that it is powerful.

2- The second weakness of the above invention is that it is a complex
algorithm, and this complexity is not good. I think that lock-based
algorithms are much simpler, so they are easier to reason about
and to implement.

I have invented a scalable algorithm that is a scalable fast Mutex that
is remarkable and that is the Holy Grail of scalable locks. It has the following characteristics; read my following thoughts to understand:

    About fair and unfair locking..

I have just read the following from a lead engineer at Amazon:

    Highly contended and fair locking in Java

    https://brooker.co.za/blog/2012/09/10/locking.html

So as you are noticing, you can use unfair locking, which can suffer
from starvation, or fair locking, which is slower than unfair locking.

I think that Microsoft synchronization objects, like the Windows critical section, use unfair locking, so they can still have starvation.

But I think that this is not the right way to do it, because I am an
inventor and I have invented a scalable Fast Mutex that is much more
powerful: with my scalable Fast Mutex you are able to tune the
"fairness" of the lock, and my Fast Mutex is capable of more than that;
read about it in my following thoughts:

    More about research and software development..

    I have just looked at the following new video:

    Why is coding so hard...

    https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work:

I am not like the techlead in the video above, because I am also an
"inventor" who has invented many scalable algorithms and their
implementations, and I am also inventing effective abstractions. I will give you an example:

Read the following from the senior research scientist called Dave
Dice:

    Preemption tolerant MCS locks

    https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you are noticing, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why
I have just invented a new Fast Mutex that is adaptive and that is
much, much better, and I think mine is the "best"; I think you will
not find it anywhere. My new scalable Fast Mutex has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It efficiently keeps its cache-coherence traffic very low
4- Very good fast-path performance
5- Good preemption tolerance
6- Faster than the scalable MCS lock
7- Not prone to convoying


    And about composability of lock-based systems now:

    Design your systems to be composable. Among the more galling claims of
    the detractors of lock-based systems is the notion that they are somehow uncomposable:

“Locks and condition variables do not support modular programming,”
reads one typically brazen claim, “building large programs by gluing
together smaller programs[:] locks make this impossible.” The claim, of course, is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and operating
systems into larger systems that remain entirely unaware of lower-level locking.

    There are two ways to make lock-based systems completely composable, and
    each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held;
    the locks used to implement the system itself are entirely behind the
    system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the
    caller with locks held, the subsystem will remain composable.

    Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be
    no global subsystem state—subsystem state must be captured in
    per-instance state, and it must be up to consumers of the subsystem to
    assure that they do not access their instance in parallel. By leaving
    locking up to the client of the subsystem, the subsystem itself can be
    used concurrently by different subsystems and in different contexts. A
    concrete example of this is the AVL tree implementation used extensively
    in the Solaris kernel. As with any balanced binary tree, the
    implementation is sufficiently complex to merit componentization, but by
    not having any global state, the implementation may be used concurrently
    by disjoint subsystems—the only constraint is that manipulation of a
    single AVL tree instance must be serialized.

    Read more here:

    https://queue.acm.org/detail.cfm?id=1454462

    And you have to look here at our DelphiConcurrent and FreepascalConcurrent:

    https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent

    Thank you,
    Amine Moulay Ramdane.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)