• More of my philosophy about decency and more of my thoughts.. (2/3)

    From Amine Moulay Ramdane@21:1/5 to All on Thu Nov 3 20:30:45 2022
    [continued from previous message]

    the following paper about ThreadSanitizer:

    https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35604.pdf


    And it says in the conclusion the following:

    "ThreadSanitizer uses a new algorithm; it has several modes of operation, ranging from the most conservative mode (which has few false positives but also misses real races) to a very aggressive one (which
    has more false positives but detects the largest number of
    real races)."

    So as you notice, since even the very aggressive mode doesn't detect all the data races, it still misses a really small number of real races, so it gives a very high probability of detecting real races, and i think that you can also use my methodology below of incrementally building a model from the source code and checking it with the Spin model checker so as to raise even further the probability of detecting real races.
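    To illustrate what a "real race" means here, the following is a minimal FreePascal sketch (it is my own illustration, it is not taken from the paper) of the classic unsynchronized-counter data race that dynamic race detectors are designed to catch: two threads increment a shared counter without any lock, so increments can be lost and the final value is usually wrong.

    { Minimal sketch of a data race in FreePascal: two threads increment a
      shared counter without synchronization, so the final value is usually
      less than 2000000; this unprotected access is the kind of "real race"
      that a dynamic race detector is designed to flag. }
    program RaceSketch;

    {$mode objfpc}

    uses
      {$ifdef unix} cthreads, {$endif}
      Classes;

    var
      Counter: LongInt = 0;   // shared and unprotected: this is the data race

    type
      TIncThread = class(TThread)
      protected
        procedure Execute; override;
      end;

    procedure TIncThread.Execute;
    var
      i: Integer;
    begin
      for i := 1 to 1000000 do
        Counter := Counter + 1;   // racy read-modify-write
    end;

    var
      T1, T2: TIncThread;
    begin
      T1 := TIncThread.Create(False);
      T2 := TIncThread.Create(False);
      T1.WaitFor;
      T2.WaitFor;
      WriteLn('Counter = ', Counter, ' (expected 2000000)');
      T1.Free;
      T2.Free;
    end.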


    Read my previous thoughts:

    More of my philosophy about race conditions and about composability and more of my thoughts..

    I say that a model is a representation of something. It captures not all of the attributes of the represented thing, but only those that seem relevant. So my way of working in software development in Delphi and Freepascal is that i build a "model" from the source code and execute it in the Spin model checker so as to detect race conditions, so i invite you to take a look at the following new tutorial that uses the powerful Spin tool:

    https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

    So you can for example install the Spin model checker so as to detect race conditions; this is how you will get much more professional at detecting deadlocks and race conditions in parallel programming. And i invite you to look at the following video so as to know how to install the Spin model checker on Windows:

    https://www.youtube.com/watch?v=MGzmtWi4Oq0

    More of my philosophy about race detection and concurrency and more..

    I have just looked quickly at different race detectors, and i think that the Intel Thread Checker from the Intel company from "USA" is also very good, since the Intel Thread Checker needs to instrument either the C++ source code or the compiled binary so as to make every memory reference and every standard Win32 synchronization primitive observable, and this instrumentation of the source code is very good since it also permits me to port my scalable algorithm inventions by, for example, wrapping them in some native Windows synchronization APIs, and this instrumentation of the source code is also business friendly, so read about different race detectors and about the Intel Thread Checker here:

    https://docs.microsoft.com/en-us/archive/msdn-magazine/2008/june/tools-and-techniques-to-identify-concurrency-issues

    So i think that the race detectors of other programming languages have to provide this kind of instrumentation of the source code, as the Intel Thread Checker from the Intel company from "USA" does.

    More of my philosophy about Rust and about memory models and about technology and more of my thoughts..


    I think i am highly smart, and i say that the new programming language that we call Rust has an important problem, since the following interesting article says that atomic operations with incorrect memory ordering can still cause race conditions in safe code; this is why the suggestion made by the researchers is:

    "Race detection techniques are needed for Rust, and they should focus on unsafe code and atomic operations in safe code."


    Read more here:

    https://www.i-programmer.info/news/98-languages/12552-is-rust-really-safe.html


    More of my philosophy about programming languages about lock-based systems and more..

    I think we have to be optimistic about lock-based systems, since race condition detection can be done in polynomial time, and you can notice it by reading the following paper:

    https://arxiv.org/pdf/1901.08857.pdf

    Or by reading the following paper:

    https://books.google.ca/books?id=f5BXl6nRgAkC&pg=PA421&lpg=PA421&dq=race+condition+detection+and+polynomial+complexity&source=bl&ots=IvxkORGkQ9&sig=ACfU3U2x0fDnNLHP1Cjk5bD_fdJkmjZQsQ&hl=en&sa=X&ved=2ahUKEwjKoNvg0MP0AhWioXIEHRQsDJc4ChDoAXoECAwQAw#v=onepage&q=race%20condition%20detection%20and%20polynomial%20complexity&f=false

    So i think we can continue to program in lock-based systems, and about the composability of lock-based systems, read my following thoughts about it:

    More of my philosophy about composability and about Haskell functional language and more..

    I have just read quickly the following article about composability,
    so i invite you to read it carefully:

    https://bartoszmilewski.com/2014/06/09/the-functional-revolution-in-c/

    I am not in accordance with the above article, and i think that the above scientist is programming in the Haskell functional language and that it is for him the way to composability, since he says that functional programming like Haskell is the way that allows composability in the presence of concurrency while lock-based systems don't allow it, but i don't agree with him, and i will give you the logical proof of it, and here it is: read what an article from ACM says, written by both Bryan M. Cantrill and Jeff Bonwick from Sun Microsystems:

    You can read about Bryan M. Cantrill here:

    https://en.wikipedia.org/wiki/Bryan_Cantrill

    And you can read about Jeff Bonwick here:

    https://en.wikipedia.org/wiki/Jeff_Bonwick

    And here is what the article says about the composability of lock-based systems in the presence of concurrency:

    "Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:

    “Locks and condition variables do not support modular programming,” reads one typically brazen claim, “building large programs by gluing together smaller programs[:] locks make this impossible.”9 The claim, of course, is incorrect. For evidence
    one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

    There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to
    user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between
    software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

    Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be
    no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the
    subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is
    sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized."

    Read more here:

    https://queue.acm.org/detail.cfm?id=1454462
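    And as a concrete illustration of the first way described in the above quote, here is a minimal FreePascal sketch (it is my own illustration, it is not taken from the article) of a subsystem that keeps its locking entirely internal: the caller never sees the lock and control never returns to the caller with the lock held, so the component can be composed into larger programs that remain unaware of the lower-level locking.

    { Minimal sketch of a lock-based yet composable subsystem: the critical
      section is private to the class and is never held when control
      returns to the caller. }
    unit SafeCounter;

    {$mode objfpc}

    interface

    uses
      SyncObjs;

    type
      TSafeCounter = class
      private
        FLock: TCriticalSection;   // the locking is entirely internal
        FValue: Int64;
      public
        constructor Create;
        destructor Destroy; override;
        procedure Increment;
        function Value: Int64;
      end;

    implementation

    constructor TSafeCounter.Create;
    begin
      inherited Create;
      FLock := TCriticalSection.Create;
      FValue := 0;
    end;

    destructor TSafeCounter.Destroy;
    begin
      FLock.Free;
      inherited Destroy;
    end;

    procedure TSafeCounter.Increment;
    begin
      FLock.Acquire;
      try
        Inc(FValue);
      finally
        FLock.Release;   // never return to the caller with the lock held
      end;
    end;

    function TSafeCounter.Value: Int64;
    begin
      FLock.Acquire;
      try
        Result := FValue;
      finally
        FLock.Release;
      end;
    end;

    end.

    So any number of threads and any number of other subsystems can call Increment and Value concurrently without knowing anything about the lock, which is exactly the kind of composability that the article is talking about.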

    More of my philosophy about HP and about the Tandem team and more of my thoughts..


    I invite you to read the following interesting article so as to notice how HP was smart by also acquiring Tandem Computers, Inc. with their "NonStop" systems and by learning from the Tandem team that has also extended HP NonStop to the x86 server platform, as you can read about in my writing below, and you can read about Tandem Computers here: https://en.wikipedia.org/wiki/Tandem_Computers , so notice that Tandem Computers, Inc. was the dominant manufacturer of fault-tolerant computer systems for ATM networks, banks, stock exchanges, telephone switching centers, and other similar commercial transaction processing applications requiring maximum uptime and zero data loss:

    https://www.zdnet.com/article/tandem-returns-to-its-hp-roots/

    More of my philosophy about HP "NonStop" to x86 Server Platform fault-tolerant computer systems and more..

    Now HP has extended HP NonStop to the x86 server platform.

    HP announced in 2013 plans to extend its mission-critical HP NonStop technology to x86 server architecture, providing the 24/7 availability required in an always-on, globally connected world, and increasing customer choice.

    Read the following to notice it:

    https://www8.hp.com/us/en/hp-news/press-release.html?id=1519347#.YHSXT-hKiM8

    And today HP provides HP NonStop on the x86 server platform, and here is an example; read here:

    https://www.hpe.com/ca/en/pdfViewer.html?docId=4aa5-7443&parentPage=/ca/en/products/servers/mission-critical-servers/integrity-nonstop-systems&resourceTitle=HPE+NonStop+X+NS7+%E2%80%93+Redefining+continuous+availability+and+scalability+for+x86+data+sheet

    So i think that programming the HP NonStop for x86 is now compatible with standard x86 programming.

    And i invite you to read my thoughts about technology here:

    https://groups.google.com/g/soc.culture.usa/c/N_UxX3OECX4


    More of my philosophy about stack allocation and more of my thoughts..


    I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have just looked at the x64 assembler of the C/C++ _alloca function, which allocates size bytes of space from the stack: it uses x64 assembler instructions to move the RSP register, and i think that it also aligns the address and ensures that it doesn't go beyond the stack limit etc., and i have quickly understood its x64 assembler, so i invite you to look at it here:

    64-bit _alloca. How to use from FPC and Delphi?

    https://www.atelierweb.com/64-bit-_alloca-how-to-use-from-delphi/


    But i think i am smart and i say that the benefit of using a stack comes mostly from the "reusability" of the stack; i mean it works this way since a thread has, for example, to execute other functions or procedures and to exit from those functions or procedures, and this exiting makes the stack memory available again for "reusability", and it is why i think that using a dynamically allocated array as a stack is also useful, since it also offers those benefits of reusability of the stack, and i think that the dynamic allocation of the array will not be expensive, so it is why i think i will implement the _alloca function using a dynamically allocated array (see the sketch after the link below), and i think it will also be good for my sophisticated coroutines library that you can read about from my following thoughts about preemptive and non-preemptive timesharing in the following web link:


    https://groups.google.com/g/alt.culture.morocco/c/JuC4jar661w
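    And here is a rough sketch of that idea (it is only my illustrative code, it is not the actual implementation of my library): a dynamically allocated array used as a reusable stack in FreePascal, with an _alloca-like Alloc() and a mark/release discipline, so that releasing back to a saved mark makes the memory available again for reusability.

    { Minimal FreePascal sketch of an _alloca-like allocator backed by a
      dynamically allocated array.  The buffer is reused: releasing back to
      a saved mark makes the space available again, which is the
      "reusability" benefit discussed above.  The sketch raises an error
      instead of growing the array, because growing could move the buffer
      and invalidate pointers that were already handed out. }
    unit ArrayStack;

    {$mode objfpc}

    interface

    uses
      SysUtils;

    type
      TArrayStack = class
      private
        FBuffer: array of Byte;   // dynamically allocated backing store
        FTop: SizeInt;            // current top of the stack
      public
        constructor Create(ACapacity: SizeInt);
        function Alloc(Size: SizeInt): Pointer;   // _alloca-like allocation
        function Mark: SizeInt;                   // remember the current top
        procedure Release(AMark: SizeInt);        // make the space above the mark reusable
      end;

    implementation

    constructor TArrayStack.Create(ACapacity: SizeInt);
    begin
      inherited Create;
      SetLength(FBuffer, ACapacity);
      FTop := 0;
    end;

    function TArrayStack.Alloc(Size: SizeInt): Pointer;
    begin
      Size := (Size + 15) and not SizeInt(15);   // align the address, like _alloca does
      if FTop + Size > Length(FBuffer) then
        raise Exception.Create('ArrayStack: capacity exceeded');
      Result := @FBuffer[FTop];
      Inc(FTop, Size);
    end;

    function TArrayStack.Mark: SizeInt;
    begin
      Result := FTop;
    end;

    procedure TArrayStack.Release(AMark: SizeInt);
    begin
      FTop := AMark;   // the space above the mark is now reusable
    end;

    end.

    So a coroutine or a thread can save a mark, allocate some temporary space with Alloc(), and release back to the mark when it exits its "frame", and the same buffer is then reused by the next call.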


    And i invite you to read my thoughts about technology here:

    https://groups.google.com/g/soc.culture.usa/c/N_UxX3OECX4


    More of my philosophy about the German model and about quality and more of my thoughts..

    I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, so i will ask the following philosophical question:


    Why is Germany so successful in spite of least working hours?


    So i think one of the most important factors are:


    Of course the first factor is that Germany has good schools and vocational training - for everyone. This makes the average worker much more productive in terms of value add per hour.

    And the second "really" important factor is the following:

    It’s in the culture of Germany to focus on quality and being effective (all the way back to Martin Luther and his protestant work ethic)... Higher quality in every step of the chain leads to a massive reduction in defects and rework. This increases everyone’s productivity. But notice that i am also speaking in my thoughts below about the other ways to increase productivity, such as specialization etc., and the way of the German model of focusing on quality and being effective, by focusing on quality in every step of the chain so as to massively reduce defects and rework, is also supported by the following methodologies of quality control and Six Sigma etc., so read my following thoughts about them:

    More of my philosophy about quality control and more of my thoughts..

    I have just looked at and quickly understood the following paper about SPC (Statistical Process Control):

    https://owic.oregonstate.edu/sites/default/files/pubs/EM8733.pdf


    I think i am highly smart, but i think that the above paper doesn't speak about the fact that you can apply the central limit theorem as follows:

    The central limit theorem states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough.

    Also the above paper doesn't speak about the following very important things:

    And I have quickly understood quality control with SPC (Statistical Process Control), and i have just discovered a smart pattern with my fluid intelligence: with SPC (Statistical Process Control) we can debug the process, like in software programming, by looking at its variability. If the variability doesn't follow a normal distribution, it means that there are defects in the process, and we say that there are special causes that cause those defects; and if the variability follows a normal distribution, we say that the process is stable and has only common causes, and it means that we can control it much more easily by looking at the control charts, which permit us to debug and control the variability by, for example, changing the machines or robots and then measuring again with the control charts.
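    And to make this idea of debugging the process more concrete, here is a small and simplified FreePascal sketch (the data is made up, and it estimates sigma with the sample standard deviation of the baseline measurements, whereas the textbook X-bar and R charts of the above paper estimate it from subgroup ranges): it computes the center line and the mean +/- 3 sigma control limits from baseline data, and then flags the new measurements that fall outside those limits as signals of special causes.

    { Simplified SPC sketch: estimate the center line and the
      "mean +/- 3 sigma" control limits from baseline (in-control) data,
      then check new measurements against those limits; a point outside
      the limits signals a special cause. }
    program SpcSketch;

    {$mode objfpc}

    uses
      Math;

    const
      // made-up baseline measurements taken while the process was stable
      Baseline: array[0..11] of Double =
        (10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 9.7, 10.4, 10.1, 9.9, 10.0, 10.2);
      // made-up new measurements to monitor
      NewData: array[0..3] of Double = (10.0, 10.3, 11.2, 9.8);

    var
      CL, Sigma, UCL, LCL: Double;
      i: Integer;
    begin
      CL := Mean(Baseline);        // center line
      Sigma := StdDev(Baseline);   // estimate of the common-cause variability
      UCL := CL + 3 * Sigma;       // upper control limit
      LCL := CL - 3 * Sigma;       // lower control limit
      WriteLn('CL = ', CL:0:3, '  UCL = ', UCL:0:3, '  LCL = ', LCL:0:3);
      for i := Low(NewData) to High(NewData) do
        if (NewData[i] > UCL) or (NewData[i] < LCL) then
          WriteLn('Point ', i, ' = ', NewData[i]:0:3, ': out of control (special cause)')
        else
          WriteLn('Point ', i, ' = ', NewData[i]:0:3, ': within the control limits (common causes)');
    end.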

    More of my philosophy about the Post Graduate Program on lean Six Sigma and more..

    More of my philosophy about Six Sigma and more..

    I think i am smart, and now i will talk more about Six Sigma since i have just talked about SPC (Statistical Process Control), so you have to know that Six Sigma needs to fulfill the following steps:

    1- Define the project goals and customer (external and internal)
    deliverables.

    2- Control future performance so improved process doesn't degrade.

    3- Measure the process to determine current performance and quantify the problem.

    4- Analyze and determine the root cause(s) of the defects.

    5- Improve the process by eliminating the defects.


    And you have to know that those steps are also important steps toward attaining ISO 9000 certification, and notice that you can use SPC(Statistical process control) and the control charts on step [4] and step [5] above.

    Other than that, i have just read the following interesting and important paper about SPC (Statistical Process Control) that explains the whole process of SPC, so i invite you to read it carefully:

    https://owic.oregonstate.edu/sites/default/files/pubs/EM8733.pdf

    So as you notice in the above paper, the central limit theorem in mathematics is very important, but notice carefully that the necessary and important condition for the central limit theorem to work is that you have to use independent and random variables, and notice in the above paper that you have to do two things: you have to reduce or eliminate the defects and you have to control the "variability" of the defects, and this is why the paper is talking about how to construct a control chart. Other than that, the central limit theorem is not only related to SPC (Statistical Process Control), but it is also related to PERT and to my PERT++ software project below, and notice that in my software project below that is called PERT++, i have provided you with two ways of estimating the critical path: first, by way of CPM (Critical Path Method), which shows all the arcs of the estimate of the critical path, and second, by way of the central limit theorem, using the inverse normal distribution function (a worked sketch follows the three estimate types below), and you have to provide my software project that is called PERT++ with the three types of estimates that are the following:

    Optimistic time - generally the shortest time in which the activity
    can be completed. It is common practice to specify optimistic times
    to be three standard deviations from the mean so that there is
    approximately a 1% chance that the activity will be completed within
    the optimistic time.

    Most likely time - the completion time having the highest
    probability. Note that this time is different from the expected time.

    Pessimistic time - the longest time that an activity might require. Three standard deviations from the mean is commonly used for the pessimistic time.
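    And here is a worked sketch of that second way (it is my own illustration, and it assumes the classical PERT beta approximation in which the expected activity time is (O + 4*M + P) / 6 and the activity standard deviation is (P - O) / 6, consistent with the three estimates above): the expected times and the variances of the activities on the critical path are summed, and by the central limit theorem the project duration is approximately normally distributed, so the completion time for a desired probability can be read from the normal distribution.

    { Worked PERT sketch: TE = (O + 4*M + P) / 6 and SD = (P - O) / 6 for
      each activity; the expected project duration is the sum of the TE of
      the activities on the critical path, and by the central limit theorem
      its distribution is approximately normal with a variance equal to the
      sum of the activity variances. }
    program PertSketch;

    {$mode objfpc}

    type
      TActivity = record
        O, M, P: Double;   // optimistic, most likely, pessimistic estimates
      end;

    const
      // made-up activities assumed to lie on the critical path
      CriticalPath: array[0..2] of TActivity =
        ((O: 2; M: 4; P: 8), (O: 3; M: 5; P: 13), (O: 1; M: 2; P: 3));

    var
      i: Integer;
      TE, SD, MeanSum, VarSum: Double;
    begin
      MeanSum := 0;
      VarSum := 0;
      for i := Low(CriticalPath) to High(CriticalPath) do
        with CriticalPath[i] do
        begin
          TE := (O + 4 * M + P) / 6;   // expected activity time
          SD := (P - O) / 6;           // activity standard deviation
          MeanSum := MeanSum + TE;
          VarSum := VarSum + Sqr(SD);  // the variances add along the path
        end;
      WriteLn('Expected project duration  = ', MeanSum:0:2);
      WriteLn('Project standard deviation = ', Sqrt(VarSum):0:2);
      // completion time with about a 95% probability:
      // mean + z(0.95) * sigma, where z(0.95) is approximately 1.645
      WriteLn('95% completion time (approx.) = ', (MeanSum + 1.645 * Sqrt(VarSum)):0:2);
    end.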

    And you can download my PERT++ from reading my following below thoughts:

    More of my philosophy about the central limit theorem and about my PERT++ and more..

    The central limit theorem states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough.

    How large is "large enough"?

    In practice, some statisticians say that a sample size of 30 is large enough when the population distribution is roughly bell-shaped. Others recommend a sample size of at least 40. But if the original population is distinctly not normal (e.g., is badly
    skewed, has multiple peaks, and/or has outliers), researchers like the sample size to be even larger. So i invite you to read my following thoughts about my software
    project that is called PERT++, and notice that the PERT networks are referred to by some researchers as "probabilistic activity networks" (PAN) because the duration of some or all of the arcs are independent random variables with known probability
    distribution functions, and have finite ranges. So PERT uses the central limit theorem (CLT) to find the expected project duration.

    And as you are noticing, this central limit theorem is also very important for quality control; read the following to notice it (i have also understood Statistical Process Control (SPC)):

    An Introduction to Statistical Process Control (SPC)

    https://www.engineering.com/AdvancedManufacturing/ArticleID/19494/An-Introduction-to-Statistical-Process-Control-SPC.aspx


    So, i have designed and implemented my PERT++, which is important for quality; please read about it and download it from my website here:

    https://sites.google.com/site/scalable68/pert-an-enhanced-edition-of-the-program-or-project-evaluation-and-review-technique-that-includes-statistical-pert-in-delphi-and-freepascal

    ---


    So I have provided you in my PERT++ with the following functions:


    function NormalDistA (const Mean, StdDev, AVal, BVal: Extended): Single;

    function NormalDistP (const Mean, StdDev, AVal: Extended): Single;

    function InvNormalDist(const Mean, StdDev, PVal: Extended; const Less: Boolean): Extended;

    For NormalDistA() or NormalDistP(), you pass the best estimate of completion time to Mean, and you pass the critical path standard deviation to StdDev, and you will get the probability of the value AVal (with NormalDistP) or the probability between the values AVal and BVal (with NormalDistA).

    For InvNormalDist(), you pass the best estimate of completion time to Mean, and you pass the critical path standard deviation to StdDev, and you will get the length of the critical path of the probability PVal, and when Less is TRUE, you will obtain a
    cumulative distribution.
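    And based only on the above declarations and descriptions, a call of those functions could look like the following illustrative FreePascal snippet (the unit name "PertUnit" and all the numeric values are only placeholders):

    { Illustrative use of the PERT++ functions declared above. }
    program PertUsageSketch;

    {$mode objfpc}

    uses
      PertUnit;   // placeholder name for the unit that declares the functions above

    var
      ProbA, ProbP: Single;
      PathLen95: Extended;
    begin
      // best estimate of completion time (Mean) = 28 days,
      // critical path standard deviation (StdDev) = 3 days

      // probability of the value AVal = 30 days
      ProbP := NormalDistP(28.0, 3.0, 30.0);

      // probability between the values AVal = 25 days and BVal = 32 days
      ProbA := NormalDistA(28.0, 3.0, 25.0, 32.0);

      // length of the critical path for the probability PVal = 0.95,
      // with Less set to TRUE so that the cumulative distribution is used
      PathLen95 := InvNormalDist(28.0, 3.0, 0.95, True);

      WriteLn('NormalDistP   = ', ProbP:0:4);
      WriteLn('NormalDistA   = ', ProbA:0:4);
      WriteLn('InvNormalDist = ', PathLen95:0:2);
    end.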


    So as you are noticing from my above thoughts that since PERT networks are referred to by some researchers as "probabilistic activity networks" (PAN) becaus