
    From Amine Moulay Ramdane@21:1/5 to All on Fri Feb 17 10:24:31 2023
    Hello,



    More of my philosophy about the genetic algorithm and about evolutionary design and about capacity planning and about technology and more of my thoughts..

    I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..


    So I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ, so you can read my thoughts of my philosophy about the genetic algorithm and about non-linear regression and
    about logic etc. in the following web link:

    https://groups.google.com/g/alt.culture.morocco/c/YwW7mgTaJOw


    And you can read about my philosophy here:

    https://groups.google.com/g/alt.culture.morocco/c/mhkIKzSMbug


    And now more of my philosophy about evolutionary design and about capacity planning and more of my thoughts..


    The enabling practices of continuous integration, testing, and refactoring provide a new environment that makes evolutionary design plausible, so I invite you to read the following article to understand more about it:

    Is Design Dead?

    Read more here:

    https://martinfowler.com/articles/designDead.html

    And read my following thoughts about the Evolutionary Design methodology so that you understand:

    And I invite you to look at step 4 of my thoughts below on the software Evolutionary Design methodology with agile; here it is:

    4- When in agile a team breaks a project into phases, it’s called
    incremental development. An incremental process is one in which
    software is built and delivered in pieces. Each piece, or increment,
    represents a complete subset of functionality. The increment may be
    either small or large, perhaps ranging from just a system’s login
    screen on the small end to a highly flexible set of data management
    screens. Each increment is fully coded and tested, and the work is
    organized with the usual agile practices of Sprints, Planning, and
    Retrospectives.

    And you will notice that it has to be done by "prioritizing" the pieces of the software to be delivered to the customers, and here again in agile you are noticing that we are also delivering prototypes of the software, since we often associate prototypes
    with nearly completed or just-before-launch versions of products. However, designers create prototypes at all phases of the design process and at various resolutions. In engineering, students are taught to think deeply before setting out to build, and
    practitioners do the same. However, as the product or system becomes increasingly complex, it becomes increasingly difficult to consider all factors while designing. Facing this reality, designers are no longer just "thinking to build" but also
    "building to think." By getting hands-on and trying to create prototypes, unforeseen issues are highlighted early, saving the costs associated with late-stage design changes. This rapid iterative cycle of thinking and building is what allows designers
    to learn rapidly from doing.

    Creating interfaces often benefits from the "build to think" approach. For example, in trying to lay out an automotive cockpit, one can simply list all the features, buttons, and knobs that must be incorporated. However, only by prototyping the cabin
    does one really start to think about how the layout should be presented to the driver in order to avoid confusion while maximizing comfort. This then allows the designer to iterate on the initial concept and develop something that is more intuitive
    and refined. Also, prototypes and their demonstrations are designed to get potential customers interested and excited.

    More of my philosophy about the Evolutionary Design methodology and more..

    Here are some important steps of software Evolutionary Design methodology:

    1- By taking a little extra time during the project to write solid code and
    fix problems today, the team creates a codebase that’s easy to maintain
    tomorrow.

    2- And the most destructive thing you can do to your project is to build
    new code, and then build more code that depends on it, and then still
    more code that depends on that, leading to that painfully familiar
    domino effect of cascading changes...and eventually leaving you with
    an unmaintainable mess of spaghetti code. So when teams write code,
    they can keep their designs simple by building the software out of
    small, self-contained units (like classes, modules, services, etc.)
    that do only one thing; this helps avoid the domino effect (see the
    small sketch after this list).

    3- Instead of creating one big design at the beginning of the project
    that covers all of the requirements, agile architects use incremental
    design, which involves techniques that allow them to design a system
    that is not just complete, but also easy for the team to modify as
    the project changes.

    4- When in agile a team breaks a project into phases, it’s called
    incremental development. An incremental process is one in which
    software is built and delivered in pieces. Each piece, or increment,
    represents a complete subset of functionality. The increment may be
    either small or large, perhaps ranging from just a system’s login
    screen on the small end to a highly flexible set of data management
    screens. Each increment is fully coded and tested, and the work is
    organized with the usual agile practices of Sprints, Planning, and
    Retrospectives.

    5- And an iterative process in agile is one that makes progress through
    successive refinement. A development team takes a first cut
    at a system, knowing it is incomplete or weak in some (perhaps many)
    areas. They then iteratively refine those areas until the product is
    satisfactory. With each iteration the software is improved through
    the addition of greater detail.
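
    And to illustrate point 2 above about small, self-contained units that do only one thing, here is a minimal sketch in Python (the classes and names are only my hypothetical illustration): each unit below does exactly one thing, so replacing one of them, for example the formatter, does not force any change in the others, and this is what avoids the domino effect:

      # Illustrative only: every class does exactly one thing.

      class PriceCalculator:
          """Only computes totals."""
          def total(self, items):
              return sum(price for _, price in items)

      class InvoiceFormatter:
          """Only turns data into text."""
          def render(self, customer, total):
              return f"Invoice for {customer}: {total:.2f} EUR"

      class InvoiceSender:
          """Only delivers the text (printing stands in for e-mail here)."""
          def send(self, text):
              print(text)

      # The three units compose without depending on each other's internals.
      items = [("book", 12.50), ("pen", 1.20)]
      text = InvoiceFormatter().render("Alice", PriceCalculator().total(items))
      InvoiceSender().send(text)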

    More of my philosophy about Democracy and the Evolutionary Design methodology..

    I will make a logical analogy between software projects and Democracy.
    First, I will say that because of the big complexity of today's software
    projects, the "requirements" of those complex software projects are
    not clear and a lot could change in them, so this is
    why we are using an Evolutionary Design methodology with different tools
    such as Unit Testing, Test Driven Development, Design Patterns,
    Continuous Integration, and Domain Driven Design, but we have to notice
    carefully that an important thing in the Evolutionary Design methodology is
    that when those complex software projects grow, we have first to
    normalize their growth by ensuring that the complex software projects
    grow "nicely" and "balanced" by using standards; second, we have to
    optimize the growth of the complex software projects by balancing between
    the criterion of how easy the complex software projects are to change and the performance of the complex software projects; and third, we have to
    maximize the growth of the complex software projects by making the most
    out of each optimization. And I think that by logical analogy we can
    notice that in Democracy we also have to normalize the growth by not
    allowing "extremism" or extremist ideologies that hurt Democracy, and we
    also have to optimize Democracy by, for example, balancing between the "performance" of the society in the Democracy and the "reliability"
    of helping others, like the weakest members of the society among the
    people who of course respect the laws.

    More of my philosophy about the problem with capacity planning of a website and more of my thoughts..


    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I have just invented a new methodology
    that greatly simplifies capacity planning of a website, which can be of a three-tier architecture with web servers, application servers, and database servers, but I have to explain more so that you understand the big problem with capacity planning of a website. So when you want, for example, to use
    web testing, the problem is
    how to choose, for example, the correct distribution of the read, write, and delete transactions on the database of the website? If it is not
    realistic, you can go beyond the knee of the curve and get an unacceptable waiting time, and the Mean Value Analysis (MVA) algorithm has
    the same problem, so how do you solve the problem? So, as you are noticing,
    this is why I have come up with my new methodology that uses mathematics to solve the problem. And read my previous thoughts:


    More of my philosophy about website capacity planning and about Quality of service and more of my thoughts..

    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, so I think that you have to lower the QoS (quality of service) of a website to a certain level, since you have to fix a limit on the number of
    connections that we allow to the website so as to not go beyond the knee of the curve, and of course I will soon show you the mathematical calculations of my new methodology of how to do capacity planning of a website, and of course you have to know
    that we have to do capacity planning using mathematics so as to know the average waiting time etc., and this
    permits us to calculate the number of connections that we allow to the website.
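
    And here is a minimal sketch, in Python, of the kind of calculation I mean (this is only an illustrative single-queue M/M/1 approximation of one tier of the website, with made-up numbers, and not my full methodology): it shows how the average response time grows sharply past the knee of the curve, and how Little's law turns a response-time target into a connection limit.

      # Illustrative only: a single-queue M/M/1 approximation of one tier of the
      # website, with made-up numbers. It shows the knee of the curve and how a
      # response-time target gives a connection limit through Little's law.

      def mm1_response_time(arrival_rate, service_time):
          rho = arrival_rate * service_time            # utilization of the tier
          if rho >= 1.0:
              return float("inf")                      # past saturation
          return service_time / (1.0 - rho)            # R = S / (1 - rho)

      def connection_limit(service_time, target_response_time):
          # Largest arrival rate that keeps R <= target:  lambda = 1/S - 1/R
          lam = 1.0 / service_time - 1.0 / target_response_time
          # Little's law: N = lambda * R is the admissible number of requests
          # in the system, i.e. the connection limit for this tier.
          return lam * target_response_time

      S = 0.050                                        # 50 ms average service time
      for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):  # the knee appears near 0.8-0.9
          print(utilization, mm1_response_time(utilization / S, S))
      print("connection limit:", connection_limit(S, target_response_time=0.200))

    So by fixing the target waiting time, the same kind of formulas give the limit on the number of connections that we allow to the website.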

    More of my philosophy about the Mean value analysis (MVA) algorithm and more of my thoughts..


    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I have just read the following paper
    about the Mean Value Analysis (MVA) algorithm, and I invite you to read it carefully:

    https://www.cs.ucr.edu/~mart/204/MVA.pdf


    But I say that I easily understand the above paper on the Mean Value Analysis (MVA) algorithm, yet I say that the above paper doesn't say that you have to empirically collect the visit ratio and the average service demand of each class, so it is not
    so practical, since I say that you can and you have to, for example, calculate the "tendency" by, for example, rendering the non-memoryless service of, for example, the database into a memoryless service, but don't worry, since I will soon make you
    understand my powerful methodology, with all the mathematical calculations, that makes the job easy for you and that makes it much more practical.
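
    And so that you see concretely what the above paper computes, here is a minimal sketch in Python of the exact MVA algorithm for a closed, single-class network of load-independent queueing stations (the demands for the web, application, and database tiers below are made-up numbers, and this is only an illustration, not my new methodology):

      # Exact Mean Value Analysis for a closed, single-class queueing network.
      # demands[k] = visit ratio * mean service time at station k (seconds).

      def mva(demands, customers, think_time=0.0):
          K = len(demands)
          queue = [0.0] * K                             # mean queue length L_k
          throughput = 0.0
          response = 0.0
          for n in range(1, customers + 1):
              residence = [demands[k] * (1.0 + queue[k]) for k in range(K)]   # R_k
              response = sum(residence)                                       # R
              throughput = n / (think_time + response)                        # X
              queue = [throughput * residence[k] for k in range(K)]           # L_k = X * R_k
          return throughput, response, queue

      # Hypothetical demands for the web, application and database tiers.
      X, R, L = mva(demands=[0.010, 0.025, 0.040], customers=50, think_time=1.0)
      print("throughput:", X, "req/s", " response time:", R, "s", " queues:", L)

    And as you can see, the algorithm needs the per-station demands as input, which is exactly the data that has to be collected empirically, and that is the practical difficulty I am talking about above.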

    More of my philosophy about formal methods and about Leslie Lamport and more of my thoughts..

    I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I have just looked at the following video about the man who revolutionized computer science with math, and I invite you to look at it:

    https://www.youtube.com/watch?v=rkZzg7Vowao

    So I say that in mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. And Leslie Lamport, the well-known scientist, says in the above video the following: "An algorithm without a proof is
    conjecture, and if you are proving things, that means using mathematics." So I think that Leslie Lamport, the well-known scientist, is not thinking correctly by saying so, since I think that you can also prove an algorithm by raising much higher the
    probability of the success of the proof without using mathematics to prove the algorithm, and I say that a proof does not have to be just a conclusion in boolean logic of true or false, since I think that a proof can be a conclusion in fuzzy logic,
    and by logical analogy it looks like how race detectors in the very aggressive mode don't detect all the data races, so they miss a really small number of real races, so it is like a very high probability of really detecting real races, so read my
    below thoughts about it so that you understand my views. And I think that the second mistake of Leslie Lamport, the well-known scientist, is that he wants us to use formal methods, but read the interesting article below about why people don't use
    formal methods:

    And I invite you to read the following new article about the known computer expert in the above video called Leslie Lamport, which says that programmers need to use math by using formal methods, in which Lamport discusses some of his work, such as the
    TLA+ specification language (developed by Lamport over the past few decades, the TLA+ [Temporal Logic of Actions] specification language allows engineers to describe the objectives of a program in a precise and mathematical way), and in which he also
    cites some of the reasons why he gives a prominent place to mathematics in programming.

    Read more in the following article, which you have to translate from French to English:

    https://www.developpez.com/actu/333640/Un-expert-en-informatique-declare-que-les-programmeurs-ont-besoin-de-plus-de-mathematiques-ajoutant-que-les-ecoles-devraient-repenser-la-facon-dont-elles-enseignent-l-informatique/

    But to answer the above expert called Leslie Lamport, I invite you to carefully read the following interesting web page about why people don't use formal methods:

    WHY DON'T PEOPLE USE FORMAL METHODS?

    https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/


    More of my philosophy about the polynomial-time complexity of race detection and more of my thoughts..

    I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ, so I have quickly understood how Rust
    detects race conditions, but I think that a slew of
    “partial order”-based methods have been proposed, whose
    goal is to predict data races in polynomial time, but at the
    cost of being incomplete and failing to detect data races in
    "some" traces. These include algorithms based on the classical
    happens-before partial order, and those based
    on newer partial orders that improve the prediction of data
    races over happens-before, so I think that we have to be optimistic,
    and read the following web page about the Sanitizers:

    https://github.com/google/sanitizers

    And notice carefully ThreadSanitizer, so read carefully
    the following paper about ThreadSanitizer:

    https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35604.pdf


    And it says in the conclusion the following:

    "ThreadSanitizer uses a new algorithm; it has several modes of operation, ranging from the most conservative mode (which has few false positives but also misses real races) to a very aggressive one (which
    has more false positives but detects the largest number of
    real races)."

    So, as you are noticing, since the very aggressive mode doesn't detect
    all the data races, it misses a really small number of real races, so it is like a very high probability of really detecting real races,
    and I think that you can also use my below methodology of incrementally using a model derived from the source code and running it in the Spin model checker so as to raise even more the probability of detecting real races.
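
    And so that you see what the happens-before partial order above means in practice, here is a minimal sketch in Python of a vector-clock, happens-before race detector over a recorded trace of events (the trace, the event names, and the code are only my hypothetical illustration, not ThreadSanitizer's implementation and not my own Delphi/Freepascal tools):

      # Illustrative only: a tiny happens-before race detector using vector clocks.
      # A clock is a dict mapping thread id -> logical time (missing entries are 0).

      def join(a, b):
          out = dict(a)
          for t, v in b.items():
              if v > out.get(t, 0):
                  out[t] = v
          return out

      def happens_before(a, b):
          # True if every component of clock a is <= the same component of b.
          return all(v <= b.get(t, 0) for t, v in a.items())

      def detect_races(trace):
          C = {}       # per-thread vector clock
          L = {}       # per-lock clock, taken at the last release
          W = {}       # per-variable clock of the last write
          R = {}       # per-variable join of the read clocks since the last write
          races = []
          for op, tid, obj in trace:
              c = C.setdefault(tid, {})
              c[tid] = c.get(tid, 0) + 1               # tick the local component
              if op == "acquire":
                  C[tid] = join(c, L.get(obj, {}))
              elif op == "release":
                  L[obj] = dict(c)
              elif op == "read":
                  if obj in W and not happens_before(W[obj], c):
                      races.append(("write/read race on", obj, "at thread", tid))
                  R[obj] = join(R.get(obj, {}), c)
              elif op == "write":
                  if obj in W and not happens_before(W[obj], c):
                      races.append(("write/write race on", obj, "at thread", tid))
                  if obj in R and not happens_before(R[obj], c):
                      races.append(("read/write race on", obj, "at thread", tid))
                  W[obj] = dict(c)
                  R[obj] = {}
          return races

      # Hypothetical trace: thread 2 synchronizes through lock "m", thread 3 does not.
      trace = [("write", 1, "x"), ("acquire", 1, "m"), ("release", 1, "m"),
               ("acquire", 2, "m"), ("release", 2, "m"), ("write", 2, "x"),
               ("write", 3, "x")]
      print(detect_races(trace))    # only thread 3's write is reported as a race

    And as you can notice, a detector like this one only knows about the accesses that appear in the collected trace, which is exactly why the partial-order methods above are incomplete and can miss races in "some" traces.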


    Read my previous thoughts:

    More of my philosophy about race conditions and about composability and more of my thoughts..

    I say that a model is a representation of something. It captures not all attributes of the represented thing, but rather only those that seem relevant. So my way of doing software development in Delphi and Freepascal is also that I am using a "model"
    derived from the source code that I am executing in the Spin model checker so as to detect race conditions, so I invite you to take a look at the following new tutorial that uses the powerful Spin tool:

    https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

    So you can, for example, install the Spin model checker so as to detect race conditions; this is how you will get much more professional at detecting deadlocks and race conditions in parallel programming. And I invite you to look at the following
    video so that you know how to install the Spin model checker on Windows:

    https://www.youtube.com/watch?v=MGzmtWi4Oq0
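
    And so that you see the idea of extracting a small model from the source code and checking all of its behaviors, here is a minimal sketch in Python (only my illustration of the principle; a real model would be written in Promela and checked with Spin): it models a non-atomic counter increment done by two threads as two atomic steps, explores every interleaving exhaustively, and reports the interleavings that violate the assertion, which is exactly the lost-update race condition.

      # Illustrative only: exhaustively explore every interleaving of a tiny model
      # of two threads doing a non-atomic increment (load, then store) of a shared
      # counter, and flag the interleavings that lose an update.

      from itertools import permutations

      THREADS = 2
      PROGRAM = ["load", "store"]                 # the atomic steps of each thread

      def interleavings():
          # Every ordering of the threads' steps that preserves per-thread order.
          steps = [t for t in range(THREADS) for _ in PROGRAM]
          return set(permutations(steps))

      def run(schedule):
          shared = 0
          pc = [0] * THREADS                      # per-thread program counter
          reg = [0] * THREADS                     # per-thread local register
          for t in schedule:
              if PROGRAM[pc[t]] == "load":
                  reg[t] = shared                 # read the shared counter
              else:
                  shared = reg[t] + 1             # write back the incremented value
              pc[t] += 1
          return shared

      violations = [(s, run(s)) for s in interleavings() if run(s) != THREADS]
      print(len(violations), "interleavings violate the assertion, for example:",
            violations[0] if violations else None)

    The same principle is what Spin applies to Promela models at a much larger scale: the model checker enumerates the interleavings that testing alone would almost never hit.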

    More of my philosophy about race detection and concurrency and more..

    I have just looked quickly at different race detectors, and I think that
    the Intel Thread Checker from the Intel company in the "USA" is also very good, since the Intel Thread Checker needs to instrument either the C++ source code or the compiled binary to make every memory reference and every standard Win32 synchronization
    primitive observable, and this instrumentation from the source code is very good since it also permits me to port my scalable algorithm inventions by, for example, wrapping them in some native Windows synchronization APIs, and this instrumentation from
    the source code is also business friendly, so read about different race detectors and about the Intel Thread Checker here:

    https://docs.microsoft.com/en-us/archive/msdn-magazine/2008/june/tools-and-techniques-to-identify-concurrency-issues

    So I think that the race detectors of other programming languages have to provide this instrumentation from the source code like the Intel Thread Checker from the Intel company in the "USA" does.

    More of my philosophy about Rust and about memory models and about technology and more of my thoughts..


    I think I am highly smart, and I say that the new programming language that we call Rust has an important problem, since you can read the following interesting article which says that atomic operations that do not have correct memory ordering can
    still cause race conditions in safe code, and this is why the suggestion made by the researchers is:

    "Race detection techniques are needed for Rust, and they should focus on unsafe code and atomic operations in safe code."


    Read more here:

    https://www.i-programmer.info/news/98-languages/12552-is-rust-really-safe.html


    More of my philosophy about programming languages and about lock-based systems and more..

    I think we have to be optimistic about lock-based systems, since race condition detection can be done in polynomial time, and you can notice it by reading the following paper:

    https://arxiv.org/pdf/1901.08857.pdf

    Or by reading the following paper:

    https://books.google.ca/books?id=f5BXl6nRgAkC&pg=PA421&lpg=PA421&dq=race+condition+detection+and+polynomial+complexity&source=bl&ots=IvxkORGkQ9&sig=ACfU3U2x0fDnNLHP1Cjk5bD_fdJkmjZQsQ&hl=en&sa=X&ved=2ahUKEwjKoNvg0MP0AhWioXIEHRQsDJc4ChDoAXoECAwQAw#v=onepage&q=race%20condition%20detection%20and%20polynomial%20complexity&f=false

    So I think we can continue to program in lock-based systems, and about the composability of lock-based systems, read my following thoughts about it:

    More of my philosophy about composability and about Haskell functional language and more..

    I have just quickly read the following article about composability,
    so I invite you to read it carefully:

    https://bartoszmilewski.com/2014/06/09/the-functional-revolution-in-c/

    I am not in accordance with the above article, and I think that the above scientist programs in the Haskell functional language and for him it is the way to composability, since he says that functional programming in the style of Haskell is the way
    that allows composability in the presence of concurrency, but that for him lock-based systems don't allow it; but I don't agree with him, and I will give you the logical proof of it, and here it is: read what an article from ACM says, written by both
    Bryan M. Cantrill and Jeff Bonwick from Sun Microsystems:

    You can read about Bryan M. Cantrill here:

    https://en.wikipedia.org/wiki/Bryan_Cantrill

    And you can read about Jeff Bonwick here:

    https://en.wikipedia.org/wiki/Jeff_Bonwick

    And here is what the article says about the composability of lock-based systems in the presence of concurrency:

    "Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:

    “Locks and condition variables do not support modular programming,” reads one typically brazen claim, “building large programs by gluing together smaller programs[:] locks make this impossible.”9 The claim, of course, is incorrect. For evidence
    one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

    There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to
    user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between
    software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

    Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be
    no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the
    subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is
    sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized."

    Read more here:

    https://queue.acm.org/detail.cfm?id=1454462
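
    And here is a minimal sketch in Python (only my illustration of the article's first approach, not code from the article) of a subsystem whose locking is entirely internal: the lock is always released before control returns to the caller, so callers can compose this component with others without knowing anything about its locking:

      # Illustrative only: the lock is an implementation detail of the subsystem
      # and is never held when control returns to the caller, so the component
      # stays composable, as the article above explains.

      import threading

      class Counter:
          def __init__(self):
              self._lock = threading.Lock()
              self._value = 0

          def increment(self):
              with self._lock:                 # acquired and released inside the call
                  self._value += 1
                  return self._value

          def get(self):
              with self._lock:
                  return self._value

      c = Counter()
      workers = [threading.Thread(target=c.increment) for _ in range(8)]
      for w in workers:
          w.start()
      for w in workers:
          w.join()
      print(c.get())                           # always 8: the increments compose safely

    And the second approach from the article, with no locks at all, would simply drop the internal lock, keep all the state per instance, and document that each instance must be serialized by its caller, exactly like the AVL tree example of the Solaris kernel above.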



    Thank you,
    Amine Moulay Ramdane.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)