
    From Amine Moulay Ramdane@21:1/5 to All on Mon Nov 29 15:56:21 2021
    Hello,



    More of my philosophy about EUV (Extreme ultraviolet lithography) and about China and about Photonic chips and about graphene chips and more..

    I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..


    I invite you to read the following article, which says that China is working on photonic chips and on graphene chips:

    https://inf.news/en/tech/339cd07fe2ecab1fc52fe7c88b7e8e8a.html

    But I think that the above article is making a mistake, since it
    says the following:

    "Although the current electronic chips have come to the 7nm/5nm process, it will be more and more difficult to rely on advanced technology to improve the performance and power consumption of the chips."

    I think that is not so true; read the following, which says
    that extreme ultraviolet (EUV) lithography equipment will extend the longevity of Moore’s Law for "at least" ten years:

    https://www.design-reuse.com/news/50683/moore-law-euv.html

    And I have also read more on the internet, and I think that extreme ultraviolet (EUV) lithography equipment can extend Moore's Law by around 15 years, which corresponds to around 100x scalability in performance, and I think that it is the same 100x
    performance as the following invention from graphene:

    About graphene and about unlocking Moore’s Law..

    I think that graphene can now be mass-produced; you can read about it here:

    We May Finally Have a Way of Mass Producing Graphene

    It's as simple as one, two, three.

    Read more here:

    https://futurism.com/we-may-finally-have-a-way-of-mass-producing-graphene

    So the following invention will be possible:

    Physicists Create Microchip 100 Times Faster Than Conventional Ones

    Read more here:

    https://interestingengineering.com/graphene-microchip-100-times-fast?fbclid=IwAR3wG09QxtQciuku4KUGBVRQPNRSbhnodPcnDySLWeXN9RCnvb0GqRAyM-4
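    As a quick sanity check of the "around 15 years for around 100x"
    claim above, here is the back-of-the-envelope arithmetic (my own,
    not from the linked articles), assuming Moore's-Law-style periodic
    doubling:

```python
import math

# A 100x gain needs log2(100) doublings; "100x in 15 years" therefore
# implies one doubling roughly every 15 / log2(100) years.
doublings = math.log2(100)      # ~6.64 doublings for a 100x gain
period = 15 / doublings         # implied doubling period in years
print(f"{doublings:.2f} doublings, one every {period:.2f} years")
```

    That is a doubling period of about 2.3 years, close to the classic
    two-year formulation of Moore's Law, which is why the two claims are
    roughly consistent.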

    More philosophy about the Microchips that are 100 Times or 1000 times
    Faster Than Conventional Ones..

    I think that the following invention of microchips that are 100
    times or 1000 times faster than conventional ones has a weakness:
    cache-coherence traffic between the cores takes time, so I think
    that they are speaking about 100 times or 1000 times more speed in
    single-core performance. Parallelism is therefore still necessary,
    and you need scalable algorithms for it, so as to scale much more
    on multicore CPUs..

    Physicists Create Microchip 100 Times Faster Than Conventional Ones

    Read more here:

    https://interestingengineering.com/graphene-microchip-100-times-fast?fbclid=IwAR3wG09QxtQciuku4KUGBVRQPNRSbhnodPcnDySLWeXN9RCnvb0GqRAyM-4
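    To illustrate the point that a much faster single core still needs
    parallelism and scalable algorithms, here is a small sketch of my
    own using Amdahl's law (not from the article): work that stays
    serial, such as cache-coherence traffic and synchronization, caps
    the overall gain no matter how fast the core gets.

```python
# Amdahl's law: overall speedup when only a fraction p of the workload
# benefits from an s-times-faster core; the remaining (1 - p), e.g.
# coherence traffic and synchronization, runs at the old speed.
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Even a 100x faster core yields under 10x if 10% is not accelerated.
print(amdahl_speedup(0.90, 100))   # ~9.2x
print(amdahl_speedup(0.99, 100))   # ~50x
```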

    More of my philosophy of why i am posting about Exascale supercomputers and about AI and about exponential progress and more..

    I think I am smart, and as you have just noticed, I am talking below
    about Exascale supercomputers, about AI, about exponential progress,
    and more, and as you have just noticed, I am talking the language of
    smart abstractions. That means I am abstracting smartly so that you
    are able to understand efficiently and go fast in sophisticated
    learning, and it is my kind of pedagogy, which I think is a more
    efficient pedagogy. For example, look at my thoughts in the
    following link about how I also smartly abstract what smartness is,
    and you will notice my kind of pedagogy; read it here:

    https://groups.google.com/g/alt.culture.morocco/c/Wzf6AOl41xs

    More of my philosophy about China and Exascale supercomputers..

    China has already reached Exascale - on two separate systems

    Read more here:

    https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/

    And in USA Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance

    Read more here:

    https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance

    But Exascale supercomputers will also make it possible to construct
    an accurate map of the brain, which allows us to "reverse" engineer
    or understand the brain; read the following to notice it:

    “If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora, if everything worked beautifully,
    it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer scientist

    Read more here to understand:

    https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction
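    The quote above implies a speedup factor of about 1000x; here is the
    trivial arithmetic (my own, just restating the quoted numbers):

```python
current_days = 1_000_000   # whole mouse brain on current supercomputers (quoted)
aurora_days = 1_000        # the same job using all of Aurora (quoted)

print(current_days / aurora_days)   # 1000.0x speedup from Aurora
print(current_days / 365.25)        # ~2738 years of compute at the current rate
```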

    Also Exascale supercomputers will allow researchers to tackle problems
    which were impossible to simulate using the previous generation of
    machines, due to the massive amounts of data and calculations involved.

    Small modular nuclear reactor (SMR) design, wind farm optimization and
    cancer drug discovery are just a few of the applications that are
    priorities of the U.S. Department of Energy (DOE) Exascale Computing
    Project. The outcomes of this project will have a broad impact and
    promise to fundamentally change society, both in the U.S. and abroad.

    Read more here:

    https://www.cbc.ca/news/opinion/opinion-exascale-computing-1.5382505

    Also, the goal of delivering safe, abundant, cheap energy from
    fusion is just one of many challenges in which exascale computing’s
    power may prove decisive. That’s the hope and expectation. To know
    more about the other benefits of using exascale computing power,
    read more here:

    https://www.hpcwire.com/2019/05/07/ten-great-reasons-among-many-more-to-build-the-1-5-exaflops-frontier/

    More of my philosophy about 3D stacking in CPUs and more..

    3D stacking offers an extension of Moore’s Law, but in 3D stacking
    heat removal is the big problem; this is why current technologies
    like Intel’s 3D stacking are limited to just two or a few layers.

    More of my philosophy about Moore’s Law and
    EUV (Extreme ultraviolet lithography)..

    Researchers have proposed successors to EUV, including e-beam and
    nanoimprint lithography, but have not found any of them to be reliable
    enough to justify substantial investment.

    And I think that by also using EUV (Extreme ultraviolet lithography)
    to create CPUs, we will extend Moore's Law by around 15 years, which
    corresponds to around 100x scalability in performance, and I think
    that it is the same 100x performance as the graphene microchip
    invention that I referenced above.


    More of my philosophy about the knee of an M/M/n queue and more..

    Here is the mathematical equation of the knee of an M/M/n queue in
    queuing theory in operational research:

    1/(n+1)^(1/n)

    n is the number of servers.

    So an M/M/1 queue has a knee at 50% utilization, and an M/M/2 has a
    knee at about 0.577, so I correct below:
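    The formula above can be checked in a few lines (a sketch of my own;
    the formula is the one quoted above):

```python
# Knee of an M/M/n queue: the utilization beyond which response time
# starts to climb sharply, using knee(n) = 1 / (n + 1)**(1 / n).
def knee(n: int) -> float:
    return 1.0 / (n + 1) ** (1.0 / n)

print(knee(1))            # 0.5   -> M/M/1: knee at 50% utilization
print(round(knee(2), 3))  # 0.577 -> M/M/2
```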

    More of my philosophy about the network topology in multicores CPUs..

    I invite you to look at the following video:

    Ring or Mesh, or other? AMD's Future on CPU Connectivity

    https://www.youtube.com/watch?v=8teWvMXK99I&t=904s

    And i invite you to read the following article:

    Does an AMD Chiplet Have a Core Count Limit?

    Read more here:

    https://www.anandtech.com/show/16930/does-an-amd-chiplet-have-a-core-count-limit

    I think I am smart, and I say that the above video and the above
    article are not so smart, so I will talk about a very important
    thing, and it is the following; read the following:

    Performance Scalability of a Multi-core Web Server

    https://www.researchgate.net/publication/221046211_Performance_scalability_of_a_multi-core_web_server

    So notice carefully that it says the following:

    "..we determined that performance scaling was limited by the capacity of
    the address bus, which became saturated on all eight cores. If this key obstacle is addressed, commercial web server and systems software are well-positioned to scale to a large number of cores."

    So as you notice, they were using an Intel Xeon with 8 cores, and
    the application was scalable to 8x, but the hardware was not
    scalable to 8x: it scaled only to 4.8x. This was caused by bus
    saturation, since address bus saturation causes poor scaling. The
    address bus carries requests and responses for data, called snoops,
    and more caches mean more sources and more destinations for snoops,
    which causes the poor scaling. So as you notice, a network topology
    of a ring bus or a bus was not sufficient to scale to 8x on an Intel
    Xeon with 8 cores, so I think that the new architectures like the
    Epyc CPU and the Threadripper CPU can use a faster bus and/or a
    different network topology that ensures full scalability both
    locally in the same node and globally between the nodes. We can then
    notice that a sophisticated mesh network topology not only reduces
    the number of hops inside the CPU, which is good for latency, but is
    also good for reliability, through its sophisticated redundancy, and
    it is faster than previous topologies like the ring bus or the bus,
    since, for example, the search on the address bus becomes
    parallelized; it looks like the internet, which uses a mesh topology
    of routers, so it parallelizes. And I also think that using a more
    sophisticated topology like a mesh network is related to queuing
    theory, since in operational research the mathematics says that we
    can make a queue like M/M/1 more efficient by making the server more
    powerful, but the knee of an M/M/1 queue is around 50%. So by
    parallelizing more in a mesh topology, like the internet or inside a
    CPU, you can both raise the knee of the queue and increase the speed
    of executing the transactions; it is like using many servers in
    queuing theory, and it permits better scaling inside a CPU or on the
    internet.
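    The claim that many servers scale better than one can be illustrated
    numerically with the Erlang C formula for an M/M/n queue (a sketch
    of my own; the numbers are illustrative): at the same per-server
    utilization, adding servers sharply reduces the probability that a
    transaction has to wait.

```python
import math

# Erlang C: probability that an arrival must wait in an M/M/n queue,
# with offered load a = lambda/mu and per-server utilization rho = a/n.
def erlang_c(n: int, a: float) -> float:
    rho = a / n
    top = a**n / math.factorial(n)
    bottom = (1 - rho) * sum(a**k / math.factorial(k) for k in range(n)) + top
    return top / bottom

# Same 80% per-server utilization: more servers -> far less waiting.
for n in (1, 2, 8):
    print(n, round(erlang_c(n, 0.8 * n), 3))
```

    With one server at 80% load an arrival waits 80% of the time; with
    eight servers at the same per-server load the probability drops
    below half, which is the queuing-theory sense in which parallel
    paths raise the usable utilization.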

    More of my philosophy about Machine programming and about oneAPI from
    Intel company..

    I will say that when you know C and C++ moderately well, it will not
    be so difficult to program OpenCL (read about OpenCL here:
    https://en.wikipedia.org/wiki/OpenCL) or CUDA, but the important
    question is: what is the difference between an FPGA and a GPU? So I
    invite you to read the following interesting paper with a GPU vs
    FPGA performance comparison:

    https://www.bertendsp.com/pdf/whitepaper/BWP001_GPU_vs_FPGA_Performance_Comparison_v1.0.pdf

    So I think, from the paper above, that a GPU is the good way when
    you want performance and also cost efficiency.

    So I think that oneAPI from Intel, which wants to do all the heavy
    lifting for you so that you can focus on the algorithm rather than
    on writing OpenCL calls, is not such a smart way of doing things,
    since, as I said above, OpenCL and CUDA programming is not so
    difficult. And as you will notice below, oneAPI from Intel permits
    you to program FPGAs in a higher-level manner, but here again, from
    the paper above, we can notice that a GPU is the good way when you
    want performance and cost efficiency, and to approximate the
    efficiency and usefulness of oneAPI from Intel you can still use
    efficient and useful libraries.

    Here is the new oneAPI from Intel company, read about it:

    https://codematters.online/intel-oneapi-faq-part-1-what-is-oneapi/

    And now I will talk about another interesting subject: the next
    revolution in the software industry, which is machine programming.
    So I invite you to read carefully the following new article about
    it:

    https://venturebeat.com/2021/06/18/ai-weekly-the-promise-and-limitations-of-machine-programming-tools/

    So I think that machine programming will be limited to AI-powered
    assistants, which is not so efficient, since I think that
    connectionism in artificial intelligence is not able to make common
    sense reasoning emerge, so I invite you to read my following
    thoughts about it to understand why:

    More of my philosophy about the limit of the connectionist models in
    artificial intelligence and more..

    I think I am smart, and I will say that the connectionist model of
    deep learning does not have the same nature as the human brain,
    since I can say that the brain is not just connections of neurons
    like in deep learning; it is also a "sense", like the sense of
    touch, and I think that this sense of the brain is biological. I
    think that this nature of the brain, of also being a sense, gives
    the emergence of consciousness and self-awareness and a higher level
    of common sense reasoning. This is why I think that the
    connectionist model in artificial intelligence is showing its limits
    by not being able to make common sense reasoning emerge, but as I
    say below, the hybrid connectionist + symbolic model can make common
    sense reasoning emerge.

    And here is what i said about human self-awareness and awareness:

    So i will start by asking a philosophical question of:

    Is human self-awareness and awareness an emergence and what is it ?

    So i will explain my findings:

    I think I have found the first smart pattern with my fluid
    intelligence, and I have also found the rest, and it is the
    following: notice that when you touch cold water, you will know
    about the essence or nature of the cold water, and you will also
    know that it is related to the senses of humans. So I think that the
    senses of a human give life to ideas; it is like a "reification" of
    an idea. I mean that an idea is alive, since it is reified with the
    senses of humans, which sense time and space and matter, so this
    reification gives the correct meaning, since you are reifying with
    the human senses, which give the meaning. And I say that this
    capacity for this kind of reification with the human senses is an
    emergence that comes from human biology. So I am smart, and I will
    say that the brain is a kind of calculator that calculates by using
    composability with the meanings that also come from this kind of
    reification with the human senses, and I think that self-awareness
    comes from the human senses sensing the ideas of our thinking, and
    that is what gives consciousness and self-awareness. So now you
    understand that what is missing in artificial intelligence is this
    kind of reification with the human senses, which renders the brain
    much more optimal than artificial intelligence, and I will explain
    more of the why of it in my next posts.

    More of my philosophy about the future of artificial intelligence and more..

    I will ask a philosophical question of:

    Can we forecast the future of artificial intelligence ?

    I think I am smart, and I am quickly noticing that connectionism in
    artificial intelligence, like with deep learning, is not working,
    because it is not able to make common sense reasoning emerge, so I
    invite you to read the following article from ScienceDaily to notice
    it, since it speaks about the connectionist models (like the ones of
    deep learning, or the transformers, which are a kind of deep
    learning) in artificial intelligence:

    https://www.sciencedaily.com/releases/2020/11/201118141702.htm

    Other than that, the following new artificial intelligence
    connectionist model from Microsoft and NVIDIA, which is better than
    GPT-3, has the same weakness, since I think that it cannot make
    common sense reasoning emerge; here it is:

    "Microsoft and Nvidia today announced that they trained what they
    claim is the largest and most capable AI-powered language model to
    date: Megatron-Turing Natural Language Generation (MT-NLG). The
    successor to the companies’ Turing NLG 17B and Megatron-LM models,
    MT-NLG contains 530 billion parameters and achieves “unmatched”
    accuracy in a broad set of natural language tasks, Microsoft and
    Nvidia say — including reading comprehension, commonsense reasoning,
    and natural language inferences."

    Read more here:

    https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/
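    As a sense of scale for the quoted parameter count (my own
    back-of-the-envelope, not from the article): just storing 530
    billion parameters at half precision takes about a terabyte, before
    any activations or optimizer state.

```python
params = 530e9               # MT-NLG parameter count, from the quote above
bytes_per_param = 2          # 2 bytes per parameter at fp16 half precision

terabytes = params * bytes_per_param / 1e12
print(terabytes)             # ~1.06 TB just for the weights
```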

    Because i also said the following:

    I think I am quickly understanding the defects of Megatron-Turing
    Natural Language Generation (MT-NLG), which is better than GPT-3,
    and it is that the "self-attention" of transformers in NLP, even if
    it scales to very long sequences, has limited expressiveness: since
    transformers cannot process input sequentially, they cannot model
    hierarchical structures and recursion, and hierarchical structure is
    widely thought to be essential to modeling natural language, in
    particular its syntax. So I think that Microsoft's Megatron-Turing
    Natural Language Generation (MT-NLG), and GPT-3 too, will be
    practically applied in limited areas, but they cannot make the
    common sense reasoning, or the like, emerge that is necessary for
    general artificial intelligence.

    Read the following paper to understand the mathematical proof of it:

    https://aclanthology.org/2020.tacl-1.11.pdf

    So I think that the model that will have much more success in making
    common sense reasoning emerge is the following hybrid model in
    artificial intelligence of connectionism + symbolism, which we call
    COMET; read about it here:

    Common Sense Comes Closer to Computers

    https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/

    And here is what i also said about COMET:

    I have just read the following article about neuroevolution, which
    is a meta-algorithm in artificial intelligence, an algorithm for
    designing algorithms, and I invite you to read about it here:

    https://www.quantamagazine.org/computers-evolve-a-new-path-toward-human-intelligence-20191106/

    So notice that it says the following:

    "In neuroevolution, you start by assigning random values to the weights
    between layers. This randomness means the network won’t be very good at
    its job. But from this sorry state, you then create a set of random
    mutations — offspring neural networks with slightly different weights —
    and evaluate their abilities. You keep the best ones, produce more
    offspring, and repeat."

    So I think that the problem with the neuroevolution above is that
    "evaluating the abilities of the offspring neural networks" lacks
    common sense.
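    The quoted procedure can be sketched in a few lines (a toy
    illustration of my own, not the article's code): random weights,
    random mutations, evaluate, keep the best, repeat. Note that the
    fitness function here is a hand-written stand-in, which is exactly
    the place where, as I say above, common sense is missing.

```python
import random

# Toy neuroevolution: evolve one weight w so the "network" maps
# input 2.0 to target 10.0, i.e. the optimum is w = 5.0.
random.seed(0)

def fitness(w: float) -> float:
    return -abs(w * 2.0 - 10.0)   # higher is better

parent = random.uniform(-1.0, 1.0)            # random initial weight
for _ in range(200):                          # generations
    offspring = [parent + random.gauss(0.0, 0.1) for _ in range(20)]
    parent = max(offspring, key=fitness)      # keep the best mutant

print(round(parent, 1))   # converges near 5.0
```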

    So read the following interesting article, which says that
    artificial intelligence has also brought a kind of common sense to
    computers; read about it here:

    https://arxiv.org/abs/1906.05317

    And read about it in the following article:

    "Now, Choi and her collaborators have united these approaches. COMET
    (short for “commonsense transformers”) extends GOFAI-style symbolic reasoning with the latest advances in neural language modeling — a kind
    of deep learning that aims to imbue computers with a statistical “understanding” of written language. COMET works by reimagining common-sense reasoning as a process of generating plausible (if
    imperfect) responses to novel input, rather than making airtight
    deductions by consulting a vast encyclopedia-like database."

    Read more here:

    https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/

    More of my philosophy about Nanotechnology and about Exponential
    Progress and more..

    I think I am smart, and I say that there are two ways of enhancing
    the intelligence or similar traits of humans. There is the way that
    I talk about below, which needs huge data sets to detect the
    "patterns" that explain human intelligence and similar human traits
    and, after that, make the changes in the genetics of humans. And
    there is the other way, using nanotechnology and nanorobots, which
    enhance the intelligence of a human much more by directly
    manipulating the brain, or by putting information into the memory or
    erasing information from the memory of a human. So when you erase
    information, like a good movie or interesting lessons of mathematics
    or similar pleasures of life, from the memory of a human by using
    nanotechnology, this allows you to recreate or have again those
    pleasures of life, so happiness will be greatly enhanced by way of
    nanotechnology and nanorobots. I think it is also the way of the
    much more advanced extraterrestrials, and I think that with our
    exponential progress we will soon be able to attain this level of
    technological sophistication.

    And read the following to know more about nanotechnology:

    Nanotechnology, the real science of miracles, the end of disease, aging, poverty and pollution

    Read more here:

    http://nanoindustries.com/nanotechnology_science_of_miracles/

    More of my philosophy about intelligence and genetics and exponential
    progress and more..

    I think I am smart, and I will say that you have to read the following:

    "Genome-wide association studies allow scientists to start to see how combinations of many, many genes interact in complicated ways. And it
    takes huge data sets to sort through all the genetic noise and find
    variants that truly make a difference on traits like intelligence."

    Read more here in the following interesting article:

    https://www.vox.com/science-and-health/2017/6/6/15739590/genome-wide-studies

    So I think that it needs huge data sets to detect the "patterns"
    that explain human intelligence and similar human traits, and I
    think that the data that permits it is growing exponentially and
    really fast, and computer power is also growing exponentially and
    really fast. So I think that we will soon be able to find all the
    genetic variants in the human genome that make a difference in
    traits like intelligence. This is why you notice that I say below
    that it is the easy part, since I think that we will soon be able to
    enhance the genetics of humans much more and become much smarter and
    much more beautiful, since of course we will soon become so
    powerful, and we have to thank this superb exponential progress of
    our humanity for it.

    More of my philosophy about white supremacism..

    I invite you to read the following article from the white
    supremacist website called National Vanguard:

    WLP88 – William Pierce: The Philosopher

    https://nationalvanguard.org/2021/09/wlp88-william-pierce-the-philosopher/

    I think those white supremacists are making a big mistake, since the
    easy part is that we will soon be able to enhance the genetics of
    humans "much" more and become much smarter and much more beautiful,
    and you have to read my following thoughts to understand correctly:

    More of my philosophy about the knee of the exponential progress curve..

    I think those white supremacists and neo-nazis are not well educated
    and lack experience; this is why they do not understand that the
    easy part is that we will soon be able to enhance the genetics of
    humans much more and become much smarter and much more beautiful,
    since I think that we have "just" already attained the knee of the
    exponential progress curve. This knee of the curve is the place
    where growth suddenly switches from a slower to an even faster
    exponential mode, so now the curve of the exponential progress of
    our humanity has "just" started to go exponentially even much
    faster; this is why, in about 10 years from now, we will become so
    powerful because of it. And you have to look at the following video
    to understand this exponential progress of our humanity:

    Exponential Progress: Can We Expect Mind-Blowing Changes In The Near Future

    https://www.youtube.com/watch?v=HfM5HXpfnJQ


    More of my philosophy about science and white supremacists and neo-nazis..

    I think white supremacists and neo-nazis are an archaism, since they
    think that there is a white European race, but that is not
    scientific, since in science there is only one race, which we call
    human, and they base their philosophy on the idea that white
    Europeans are smarter than others. But it is an archaism, since you
    have to look in the following video at what the geneticist Jennifer
    Doudna is saying; she co-invented a groundbreaking new technology
    for editing genes, called CRISPR-Cas9, she is a Nobel prize winner,
    and she believes that the technical obstacles to gene editing have
    been overcome and that the world is rapidly approaching the day when
    it will be possible to make essentially any kind of change to any
    kind of human genome. So I think we will soon be able to enhance the
    genetics of humans much more, so that humans become much smarter or
    much more beautiful or such, and look at the following video to
    notice it. So we have to know how to be patient, and you have to
    take into account our exponential progress of humanity; read about
    it in my thoughts below:

    The Era of Genetically Modified Superhumans

    https://www.youtube.com/watch?v=klo-rSlsju8&t=23s


    And read my following thoughts:


    More of my philosophy about Nanotechnology and about Exponential Progress..

    We will soon be so powerful, and I invite you to look at the following interesting video to understand it:

    Exponential Progress: Can We Expect Mind-Blowing Changes In The Near Future

    https://www.youtube.com/watch?v=HfM5HXpfnJQ

    And read the following:

    Nanotechnology, the real science of miracles, the end of disease, aging, poverty and pollution

    Read more here:

    http://nanoindustries.com/nanotechnology_science_of_miracles/

    And we can use something like the following gene therapy, which
    extends lifespan by about 25 percent, so as to be able to reach the
    era of nanotech that could make humans immortal.

    Read the following:

    Chinese scientists develop gene therapy that could delay aging

    Read more here:

    https://nypost.com/2021/01/20/chinese-scientists-develop-gene-therapy-which-could-delay-aging/

    And read my previous thoughts to understand:


    I have just posted the following:

    --

    Study Finds The Ageing Process Is Unstoppable, Despite Your Best Efforts

    "A new study by an international collaboration of scientists from 14
    countries and experts from the University of Oxford has now found,
    we probably can’t slow the rate at which we get older due to
    biological constraints. Researchers looked to examine the “invariant
    rate of ageing” hypothesis, which suggests a species has a
    relatively fixed rate of ageing from adulthood."

    Read more here:

    https://www.womenshealth.com.au/study-finds-the-ageing-process-is-unstoppable-despite-your-best-efforts

    ---


    And here is how to solve the above problem:

    Nanotech could make humans immortal by 2040, futurist says

    Read more here:

    https://www.computerworld.com/article/2528330/nanotech-could-make-humans-immortal-by-2040--futurist-says.html


    Thank you,
    Amine Moulay Ramdane.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
    From Amine Moulay Ramdane@21:1/5 to All on Tue Dec 21 14:24:13 2021
    Hello,


    More of my philosophy about EUV (Extreme ultraviolet lithography) and about China and about Photonic chips and about graphene chips and more..

    I am a white arab from Morocco, and i think i am smart since i have also invented many scalable algorithms and algorithms..


    I invite you to read the following article that says that China is working on photonic chips and on graphene chips:

    https://inf.news/en/tech/339cd07fe2ecab1fc52fe7c88b7e8e8a.html

    But i think that the above article is making a mistake, since it is
    saying the following:

    "Although the current electronic chips have come to the 7nm/5nm process, it will be more and more difficult to rely on advanced technology to improve the performance and power consumption of the chips."

    I think it is not so true, since read the following that says
    that extreme ultraviolet (EUV) lithography equipment will extend the longevity of Moore’s Law for "at least" ten years:

    https://www.design-reuse.com/news/50683/moore-law-euv.html

    And i have also read more on internet and i think that extreme ultraviolet (EUV) lithography equipment can extend Moore's law by around 15 years that corresponds to around 100x scalability in performance, and i think that it is the same performance of
    100x as the following invention from graphene:

    About graphene and about unlocking Moore’s Law..

    I think that graphene can now be mass produced, you can read about it here:

    We May Finally Have a Way of Mass Producing Graphene

    It's as simple as one, two, three.

    Read more here:

    https://futurism.com/we-may-finally-have-a-way-of-mass-producing-graphene

    So the following invention will be possible:

    Physicists Create Microchip 100 Times Faster Than Conventional Ones

    Read more here:

    https://interestingengineering.com/graphene-microchip-100-times-fast?fbclid=IwAR3wG09QxtQciuku4KUGBVRQPNRSbhnodPcnDySLWeXN9RCnvb0GqRAyM-4

    More philosophy about the microchips that are 100 times or 1,000 times
    faster than conventional ones..

    I think that this invention of microchips 100 or 1,000 times faster
    than conventional ones has a weakness, and it is cache-coherence
    traffic between cores, which takes time. So I think they are speaking
    of 100x or 1,000x more speed in single-core performance; parallelism
    is therefore still necessary, and you need scalable algorithms
    in order to scale much further on multicore CPUs..
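    One hedged way to picture this cache-coherence cost is Gunther's Universal Scalability Law, a model I am bringing in purely for illustration (the coefficients below are made up, not measured): its kappa term models coherence traffic between cores, so past some core count, adding cores actually reduces throughput even if each core is very fast.

```python
def usl_speedup(n: int, sigma: float, kappa: float) -> float:
    """Universal Scalability Law: speedup on n cores, where sigma models
    contention (the serial fraction) and kappa models cache-coherence
    (crosstalk) traffic between cores."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Illustrative, made-up coefficients: 3% contention, 0.1% coherence cost.
for n in (1, 8, 16, 32, 64):
    print(f"{n} cores: speedup = {usl_speedup(n, 0.03, 0.001):.1f}")
```

    With these made-up coefficients the speedup peaks around 32 cores and then declines, which illustrates why a 100x faster core does not remove the need for scalable, coherence-friendly algorithms.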

    Physicists Create Microchip 100 Times Faster Than Conventional Ones

    Read more here:

    https://interestingengineering.com/graphene-microchip-100-times-fast?fbclid=IwAR3wG09QxtQciuku4KUGBVRQPNRSbhnodPcnDySLWeXN9RCnvb0GqRAyM-4

    More of my philosophy about why I am posting about Exascale supercomputers, about AI, about exponential progress and more..

    I think I am smart, and as you have just noticed, I am talking below
    about Exascale supercomputers, about AI, about exponential progress
    and more, and I am speaking the language of smart abstractions: I
    abstract smartly so that you can understand efficiently and move
    quickly through sophisticated learning. It is my kind of pedagogy,
    which I think is a more efficient pedagogy. For example, look at my
    thoughts in the following link, where I also abstract smartly what
    smartness is, and you will notice my kind of pedagogy; read it here:

    https://groups.google.com/g/alt.culture.morocco/c/Wzf6AOl41xs

    More of my philosophy about China and Exascale supercomputers..

    China has already reached Exascale - on two separate systems

    Read more here:

    https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/

    And in the USA: Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance

    Read more here:

    https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance

    But Exascale supercomputers will also make it possible to construct an
    accurate map of the brain, which will allow us to "reverse"-engineer
    or understand the brain; read the following to notice it:

    “If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora, if everything worked beautifully,
    it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer scientist

    Read more here to understand:

    https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction

    Also Exascale supercomputers will allow researchers to tackle problems
    which were impossible to simulate using the previous generation of
    machines, due to the massive amounts of data and calculations involved.

    Small modular nuclear reactor (SMR) design, wind farm optimization and
    cancer drug discovery are just a few of the applications that are
    priorities of the U.S. Department of Energy (DOE) Exascale Computing
    Project. The outcomes of this project will have a broad impact and
    promise to fundamentally change society, both in the U.S. and abroad.

    Read more here:

    https://www.cbc.ca/news/opinion/opinion-exascale-computing-1.5382505

    Also, the goal of delivering safe, abundant, cheap energy from fusion is
    just one of many challenges in which exascale computing’s power may
    prove decisive. That’s the hope and expectation. To learn more about
    the other benefits of Exascale computing power, read more here:

    https://www.hpcwire.com/2019/05/07/ten-great-reasons-among-many-more-to-build-the-1-5-exaflops-frontier/

    More of my philosophy about 3D stacking in CPUs and more..

    3D stacking offers an extension of Moore’s Law, but heat removal is
    the big problem in 3D stacking; this is why current technologies,
    such as Intel's 3D stacking, are limited to just two or a few layers.

    More of my philosophy about Moore’s Law and
    EUV (Extreme ultraviolet lithography)..

    Researchers have proposed successors to EUV, including e-beam and
    nanoimprint lithography, but have not found any of them to be reliable
    enough to justify substantial investment.

    And I think that by also using EUV (Extreme ultraviolet lithography)
    to create CPUs we will extend Moore's Law by around 15 years, which
    corresponds to roughly a 100x gain in performance, and I think it is
    the same 100x as the graphene microchip invention already cited above
    ("Physicists Create Microchip 100 Times Faster Than Conventional Ones").



    More of my philosophy about the knee of an M/M/n queue and more..

    Here is the mathematical equation of the knee of an M/M/n queue in
    queuing theory in operational research:

    1/(n+1)^(1/n)

    n is the number of servers.

    So an M/M/1 queue has its knee at 50% utilization, and an M/M/2 at
    about 0.577 (that is, 1/sqrt(3)), so I correct below:
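    As a quick sanity check of this formula, here is a minimal Python sketch (the function name `knee` is my own) that evaluates 1/(n+1)^(1/n) for a few server counts; note how the knee rises toward 100% utilization as servers are added:

```python
def knee(n: int) -> float:
    """Knee utilization of an M/M/n queue: 1/(n+1)^(1/n)."""
    return 1.0 / (n + 1) ** (1.0 / n)

for n in (1, 2, 4, 8, 16):
    print(f"M/M/{n}: knee = {knee(n):.3f}")
```

    So M/M/1 gives 0.500 and M/M/2 gives about 0.577.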

    More of my philosophy about the network topology in multicore CPUs..

    I invite you to look at the following video:

    Ring or Mesh, or other? AMD's Future on CPU Connectivity

    https://www.youtube.com/watch?v=8teWvMXK99I&t=904s

    And I invite you to read the following article:

    Does an AMD Chiplet Have a Core Count Limit?

    Read more here:

    https://www.anandtech.com/show/16930/does-an-amd-chiplet-have-a-core-count-limit

    I think I am smart, and I say that the above video and the above article
    are not so smart, so I will talk about a very important thing. Read
    the following:

    Performance Scalability of a Multi-core Web Server

    https://www.researchgate.net/publication/221046211_Performance_scalability_of_a_multi-core_web_server

    So notice carefully that it is saying the following:

    "..we determined that performance scaling was limited by the capacity of
    the address bus, which became saturated on all eight cores. If this key obstacle is addressed, commercial web server and systems software are well-positioned to scale to a large number of cores."

    So as you notice, they were using an 8-core Intel Xeon: the
    application was scalable to 8x, but the hardware was not, since it
    scaled only to 4.8x. This was caused by saturation of the address
    bus. Address-bus saturation causes poor scaling because the address
    bus carries requests and responses for data, called snoops, and more
    caches mean more sources and more destinations for snoops.

    So a bus or ring-bus network topology was not sufficient to scale to
    8x on an 8-core Intel Xeon, and I think that newer architectures such
    as the Epyc and Threadripper CPUs can use a faster bus and/or a
    different network topology that ensures full scalability both locally
    within a node and globally between nodes.

    We can then notice that a sophisticated mesh network topology not
    only reduces the number of hops inside the CPU, which is good for
    latency, but is also good for reliability thanks to its redundancy,
    and it is faster than previous topologies such as the ring bus or the
    plain bus, since, for example, the search on the address bus becomes
    parallelized. It resembles the internet, which uses a mesh topology
    of routers and thus parallelizes.

    I also think that using a more sophisticated topology such as a mesh
    network is related to queuing theory: in operational research, the
    mathematics says that we can make a queue like M/M/1 more efficient
    by making the server more powerful, but the knee of an M/M/1 queue is
    only around 50% utilization. By parallelizing more, as a mesh
    topology does (on the internet or inside a CPU), we both raise the
    knee of the queue and speed up the execution of transactions; it is
    like using many servers in queuing theory, and it permits better
    scaling inside a CPU or across the internet.
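    To make this queuing-theory point concrete, here is a small Python sketch of the standard Erlang C formula for an M/M/c queue (my own illustration; the arrival and service rates are arbitrary): at the same 80% per-server utilization, pooling the work across more servers sharply reduces the mean waiting time, which is the "many servers" effect described above.

```python
from math import factorial

def erlang_c(c: int, a: float) -> float:
    """Probability that an arriving job must wait in an M/M/c queue,
    where a = lambda/mu is the offered load (requires a < c)."""
    rho = a / c
    top = a ** c / factorial(c)
    bottom = (1 - rho) * sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c: int, lam: float, mu: float) -> float:
    """Mean time spent waiting in queue (Wq) for an M/M/c system."""
    a = lam / mu
    return erlang_c(c, a) / (c * mu - lam)

# Same 80% utilization per server, service rate mu = 1:
for c in (1, 2, 4, 8):
    print(f"c={c}: Wq = {mean_wait(c, lam=0.8 * c, mu=1.0):.3f}")
```

    For c = 1 this reduces to the textbook M/M/1 result Wq = rho/(mu - lambda) = 4.0, and the wait drops steeply as c grows.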

    More of my philosophy about Machine Programming and about oneAPI from
    Intel..

    I will say that when you know C and C++ moderately well, it is not so
    difficult to program OpenCL (read about OpenCL here: https://en.wikipedia.org/wiki/OpenCL) or CUDA, but the important
    question is: what is the difference between an FPGA and a GPU? So I
    invite you to read the following interesting paper with a GPU vs.
    FPGA performance comparison:

    https://www.bertendsp.com/pdf/whitepaper/BWP001_GPU_vs_FPGA_Performance_Comparison_v1.0.pdf

    So I think, from the paper above, that the GPU is the right way to go
    when you want both performance and cost efficiency.

    So I think that oneAPI from Intel, which aims to do all the heavy
    lifting for you so that you can focus on the algorithm rather than on
    writing OpenCL calls, is not such a smart way of doing things, since,
    as I said above, OpenCL and CUDA programming is not so difficult. And
    as you will notice below, oneAPI from Intel also lets you program
    FPGAs in a higher-level manner, but here again, from the paper above,
    we can notice that the GPU is the right way to go when you want
    performance and cost efficiency; and to approximate well the
    efficiency and usefulness of oneAPI from Intel, you can still use
    efficient and useful libraries.

    Here is the new oneAPI from Intel; read about it:

    https://codematters.online/intel-oneapi-faq-part-1-what-is-oneapi/

    And now I will talk about another interesting subject: the next
    revolution in the software industry, which is Machine Programming. I
    invite you to read carefully the following new article about it:

    https://venturebeat.com/2021/06/18/ai-weekly-the-promise-and-limitations-of-machine-programming-tools/

    So I think that Machine Programming will be limited to AI-powered
    assistants, which is not so efficient, since I think that
    connectionism in artificial intelligence is not able to make common
    sense reasoning emerge. I invite you to read my following thoughts
    about it in order to understand why:

    More of my philosophy about the limit of the connectionist models in
    artificial intelligence and more..

    I think I am smart, and I will say that the connectionist model, like
    that of deep learning, does not have the same nature as the human
    brain, since I can say that the brain is not just connections of
    neurons as in deep learning; it is also a "sense", like the sense of
    touch, and I think that this sense of the brain is biological. I
    think that this nature of the brain, of also being a sense, gives
    rise to the emergence of consciousness, self-awareness and a higher
    level of common sense reasoning. This is why I think that the
    connectionist model in artificial intelligence is showing its limits
    by not being able to make common sense reasoning emerge; but as I say
    below, the hybrid connectionist + symbolic model can make common
    sense reasoning emerge.

    And here is what I said about human self-awareness and awareness:

    So I will start by asking a philosophical question:

    Is human self-awareness and awareness an emergence, and what is it?

    So I will explain my findings:

    I think I have found the first smart pattern with my fluid
    intelligence, and I have also found the rest, which is the following:

    Notice that when you touch cold water, you come to know the essence
    or nature of the cold water, and you also know that it is related to
    human senses. So I think that the senses of a human give life to
    ideas; it is like a "reification" of an idea. I mean that an idea is
    alive because it is, so to speak, reified with the human senses that
    sense time, space and matter, so this reification gives the correct
    meaning, since you are reifying with the human senses that give the
    meaning. And I say that this capacity for reification with the human
    senses is an emergence that comes from human biology. So I will say
    that the brain is a kind of calculator that calculates by using
    composability with the meanings that also come from this kind of
    reification with the human senses, and I think that self-awareness
    comes from the human senses sensing the ideas of our own thinking,
    which is what gives consciousness and self-awareness. So now you
    understand that what is missing in artificial intelligence is this
    kind of reification with the human senses, which renders the brain
    much more optimal than artificial intelligence; I will explain more
    of the why of it in my next posts.

    More of my philosophy about the future of artificial intelligence and more..

    I will ask a philosophical question:

    Can we forecast the future of artificial intelligence?

    I think I am smart, and I am quickly noticing that connectionism in
    artificial intelligence, as with deep learning, is not working,
    because it is not able to make common sense reasoning emerge. I
    invite you to read the following article from ScienceDaily to notice
    it, since it speaks about the connectionist models (like those of
    deep learning, or the transformers, which are a kind of deep
    learning) in artificial intelligence:

    https://www.sciencedaily.com/releases/2020/11/201118141702.htm

    Other than that, the new artificial intelligence connectionist models
    below from Microsoft and NVIDIA, which are better than GPT-3, have
    the same weakness, since I think that they cannot make common sense
    reasoning emerge; here they are:

    "Microsoft and Nvidia today announced that they trained what they claim
    is the largest and most capable AI-powered language model to date: Megatron-Turing Natural Language Generation (MT-NLP). The successor to
    the companies’ Turing NLG 17B and Megatron-LM models, MT-NLP contains
    530 billion parameters and achieves “unmatched” accuracy in a broad set
    of natural language tasks, Microsoft and Nvidia say — including reading comprehension, commonsense reasoning, and natural language inferences."

    Read more here:

    https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/

    Because I also said the following:

    I think I am quickly understanding the defects of Megatron-Turing
    Natural Language Generation (MT-NLP), which is better than GPT-3: the
    "self-attention" of transformers in NLP, even if it scales to very
    long sequences, has limited expressiveness. Since transformers cannot
    process input sequentially, they cannot model hierarchical structures
    and recursion, and hierarchical structure is widely thought to be
    essential to modeling natural language, in particular its syntax. So
    I think that Microsoft's Megatron-Turing Natural Language Generation
    (MT-NLP), and GPT-3 too, will be practically applied in limited
    areas, but they cannot make emerge the common sense reasoning and the
    like that are necessary for general artificial intelligence.

    Read the following paper to understand the mathematical proof of this:

    https://aclanthology.org/2020.tacl-1.11.pdf

    So I think that the model that has a much better chance of making
    common sense reasoning emerge is the following hybrid model in
    artificial intelligence, of connectionism + symbolism, that we call
    COMET; read about it here:

    Common Sense Comes Closer to Computers

    https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/

    And here is what I also said about COMET:

    I have just read the following article about neuroevolution, which is
    a meta-algorithm in artificial intelligence, an algorithm for
    designing algorithms; I invite you to read about it here:

    https://www.quantamagazine.org/computers-evolve-a-new-path-toward-human-intelligence-20191106/

    So notice that it says the following:

    "In neuroevolution, you start by assigning random values to the weights
    between layers. This randomness means the network won’t be very good at
    its job. But from this sorry state, you then create a set of random
    mutations — offspring neural networks with slightly different weights —
    and evaluate their abilities. You keep the best ones, produce more
    offspring, and repeat."

    So I think that the problem with the neuroevolution above is that the
    step of "evaluating the abilities of the offspring neural networks"
    lacks common sense.
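    To make the quoted description concrete, here is a minimal, self-contained neuroevolution sketch (entirely my own illustration: the tiny 2-2-1 network, the XOR task, the population sizes and the mutation scale are all arbitrary choices). Note that the fitness function is just squared error, which is exactly the kind of crude "evaluation of abilities" that carries no common sense:

```python
import math
import random

# A tiny 2-2-1 feedforward net; w is a flat list of 9 weights/biases.
def net(w, x1, x2):
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def fitness(w):
    # Negative squared error over the XOR cases: higher is better.
    return -sum((net(w, x1, x2) - y) ** 2 for x1, x2, y in XOR)

random.seed(42)
# Start from random weights, as the quote says.
pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(30)]
start = fitness(max(pop, key=fitness))
for gen in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # keep the best ones
    children = [[wi + random.gauss(0, 0.3) for wi in random.choice(parents)]
                for _ in range(20)]         # slightly mutated offspring
    pop = parents + children                # produce more offspring, repeat
best = fitness(max(pop, key=fitness))
print(f"best fitness: {start:.3f} -> {best:.3f}")
```

    Because the best networks are carried over unchanged (elitism), the best fitness can only improve from generation to generation.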

    So read the following interesting paper, which says that artificial
    intelligence has also brought a kind of common sense to computers;
    read about it here:

    https://arxiv.org/abs/1906.05317

    And read about it in the following article:

    "Now, Choi and her collaborators have united these approaches. COMET
    (short for “commonsense transformers”) extends GOFAI-style symbolic reasoning with the latest advances in neural language modeling — a kind
    of deep learning that aims to imbue computers with a statistical “understanding” of written language. COMET works by reimagining common-sense reasoning as a process of generating plausible (if
    imperfect) responses to novel input, rather than making airtight
    deductions by consulting a vast encyclopedia-like database."

    Read more here:

    https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/


    Also, the white supremacists and others have to take into account my following thoughts in order to understand correctly:

    And read carefully my following thoughts about nanotechnology, about exponential progress and about genetics:

    https://groups.google.com/g/alt.culture.morocco/c/mjE_2AG1TKQ



    Thank you,
    Amine Moulay Ramdane.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)