• More of my philosophy about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..

    From Amine Moulay Ramdane@21:1/5 to All on Tue May 16 10:05:33 2023
    Hello,



    More of my philosophy about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..

    I am a white Arab from Morocco, and i think i am smart since i have also invented many scalable algorithms and other algorithms..



    I invite you to look at Kithara RealTime Suite from Germany, a modular real-time extension for Windows that is comparable to RTX64 from the USA (you can read about RTX64 here: https://www.intervalzero.com/en-products/en-rtx64/ ). Kithara RealTime Suite supports programming in Delphi, and it also supports C++ and C#; read about this modular real-time extension for Windows here:


    https://kithara.com/en/products/real-time-suite



    And i am also currently working with Kithara RealTime Suite, and i am currently implementing some interesting real-time libraries and programs in Delphi for it. And read my following thoughts so that you understand more of my views about it:


    Delphi Integration: Kithara RealTime Suite provides seamless integration with Delphi, allowing developers to leverage the Delphi IDE and its extensive component library for real-time development.

    And yes, Delphi can be used for real-time and critical system programming, and to enhance the safety and reliability of your Delphi code, here are some suggestions:


    - Adhere to Best Practices: Follow software engineering best practices such as modular design, code reuse, and encapsulation. This can help improve code readability, maintainability, and reduce the potential for errors.

    - Apply Defensive Programming Techniques: Implement defensive programming techniques such as input validation, error handling, and boundary checks (see the small Delphi sketch after this list). This can help prevent unexpected behaviors, improve robustness, and enhance the safety of your code.

    - Use Code Reviews and Testing: Conduct thorough code reviews to identify and address potential issues. Implement comprehensive testing methodologies, including unit testing, integration testing, and regression testing, to catch bugs and ensure the
    correctness of your code.

    - Apply Design Patterns: Utilize design patterns that promote safety and reliability, such as the Observer pattern, State pattern, or Command pattern. These patterns can help structure your code in a more modular and maintainable way.

    - Employ Static Code Analysis Tools: Utilize static code analysis tools that are compatible with Delphi. These tools can help identify potential issues, enforce coding guidelines, and detect common programming mistakes.

    - Consider Formal Methods: While Delphi may not directly support SPARK or formal verification, you can use external tools or libraries to apply formal methods to critical parts of your codebase. Formal methods involve mathematical verification techniques
    to prove the correctness of software.

    - Documentation and Code Comments: Maintain thorough documentation and meaningful code comments. This can enhance code comprehension, facilitate future maintenance, and aid in understanding the safety measures employed in your code.


    By implementing these practices, you can improve the safety and reliability of your Delphi codebase.
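
    And here is a minimal Delphi sketch of the defensive programming point above; it is not tied to the Kithara API, and the buffer name, the accepted value range and the error messages are made up only for illustration:

    // Minimal defensive-programming sketch: validate the index and the value
    // before touching the buffer, instead of trusting the caller, and report
    // bad input with a descriptive exception.
    program DefensiveSketch;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils;

    const
      MaxSamples = 1024;

    type
      TSampleBuffer = array[0..MaxSamples - 1] of Double;

    procedure StoreSample(var Buffer: TSampleBuffer; Index: Integer; Value: Double);
    begin
      // Boundary check on the index.
      if (Index < Low(Buffer)) or (Index > High(Buffer)) then
        raise ERangeError.CreateFmt('Sample index %d is out of range 0..%d',
          [Index, High(Buffer)]);
      // Input validation on the value (the accepted range is only an example).
      if (Value < -1000.0) or (Value > 1000.0) then
        raise EArgumentOutOfRangeException.CreateFmt(
          'Sample value %.2f is outside the accepted range -1000..1000', [Value]);
      Buffer[Index] := Value;
    end;

    var
      Buffer: TSampleBuffer;
    begin
      try
        StoreSample(Buffer, 10, 3.14);    // valid input
        StoreSample(Buffer, 5000, 1.0);   // triggers the boundary check
      except
        // Error handling: reject the bad input instead of corrupting memory.
        on E: Exception do
          Writeln('Rejected bad input: ', E.ClassName, ': ', E.Message);
      end;
    end.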


    And you can read about the new version of Delphi and buy it from the following website:

    https://www.embarcadero.com/products/delphi


    More of my philosophy about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


    "Businesses in, for example, finance, logistics, and energy, will benefit hugely from quantum’s applications for optimization, simulation, and forecasting. And one potential application of quantum computing that is exciting is in drug discovery and
    diagnostics. These quantum advantages largely depend on quantum’s elevated computing power which could enable physicians and researchers to solve problems which are otherwise intractable with classical computers. Notably, this includes the potential to
    simulate very large, complex molecules, which are actually quantum systems, meaning that a quantum computer can more effectively predict the properties, behaviours and interactions of those molecules at an atomic level. This has huge implications for
    identifying new drug candidates, the future of personalised medicine, and the ability to assess for abnormalities in tissues which cannot be discerned with the naked eye – or with current computational methods."

    Read more here:

    https://www.linkedin.com/pulse/quantum-its-all-computing-stuart-woods/?utm_source=share&utm_medium=member_ios&utm_campaign=share_via&fbclid=IwAR1JC8rIUmzUvD-YcRFRc-iEwRdZZ2rRYfHZcgRih8u8Lm2NO_RRV36WmHI


    And i invite you to read the following interesting article:


    Quantum computers are coming. Get ready for them to change everything

    https://www.zdnet.com/article/quantum-computers-are-coming-get-ready-for-them-to-change-everything/


    And there is also another way of attaining Zettascale, and it is with quantum-classical hybrid systems, and you can read about it here:

    PREPARING FOR UPCOMING HYBRID CLASSICAL-QUANTUM COMPUTE

    https://www.nextplatform.com/2023/03/23/preparing-for-upcoming-hybrid-classical-quantum-compute/


    And of course we need Zettascale or ZettaFLOP machines so that the Metaverse can be possible, and as you will notice, the article below, in which Intel’s Raja Koduri is talking, says that the architecture is possible and that it will be ready around 2027 or 2030, and it is the following:

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x which, on top of the two-ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what
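
    And as a quick back-of-the-envelope check of the arithmetic in that quote (my own rough calculation, not an official Intel figure):

    16 \times 2 \times 3 \times 5 = 480 \approx 500, \qquad 500 \times 2\ \text{EFLOP/s} \approx 1000\ \text{EFLOP/s} = 1\ \text{ZFLOP/s}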


    More of my philosophy about China and Exascale supercomputers..

    China has already reached Exascale - on two separate systems

    Read more here:

    https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/


    And in the USA: Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance

    Read more here:

    https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance


    But Exascale or Zettascale supercomputers will also allow us to construct an accurate map of the brain that allows us to "reverse engineer" or understand the brain, so read the following so that you notice it:

    “If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora, if everything worked beautifully,
    it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer scientist

    Read more here so that you understand:

    https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction
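
    And just to make the scale in that quote explicit, it is a ratio of about a thousand:

    \frac{1\,000\,000\ \text{days on current supercomputers}}{1\,000\ \text{days on all of Aurora}} = 1000\times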


    Also Exascale supercomputers will allow researchers to tackle problems
    which were impossible to simulate using the previous generation of
    machines, due to the massive amounts of data and calculations involved.

    Small modular nuclear reactor (SMR) design, wind farm optimization and
    cancer drug discovery are just a few of the applications that are
    priorities of the U.S. Department of Energy (DOE) Exascale Computing
    Project. The outcomes of this project will have a broad impact and
    promise to fundamentally change society, both in the U.S. and abroad.

    Read more here:

    https://www.cbc.ca/news/opinion/opinion-exascale-computing-1.5382505


    Also the goal of delivering safe, abundant, cheap energy from fusion is
    just one of many challenges in which exascale computing’s power may
    prove decisive. That’s the hope and expectation. And to know more about the other benefits of using Exascale computing power, read more here:

    https://www.hpcwire.com/2019/05/07/ten-great-reasons-among-many-more-to-build-the-1-5-exaflops-frontier/


    And I had just said the following about the Intel company from the USA, before reading the following article:


    "And you have to know that in the quarter, Intel’s sales across all product lines fell by 36.2 percent to $11.721 billion, but i think that Intel CEO Pat Gelsinger is still optimistic and he insists that Intel plan to grow a whopping foundry business
    will pay off, and he also believes that the PC market will rebound at some point, and Intel CEO Pat Gelsinger is also optimistic about the process and server processor roadmaps, read more here about it: https://www.nextplatform.com/2023/03/31/finally-
    some-good-news-for-the-intel-xeon-cpu-roadmap/, so i think we have to be optimistic about Intel , and i invite you to read the other following article so that to understand more:


    https://www.theregister.com/2023/04/28/intel_28b_loss/ "



    So you can read carefully the following new article so that you understand more about this subject of the recovery of the AMD and Intel CPU market:

    AMD and Intel CPU Market Share Report: Recovery on the Horizon


    https://www.tomshardware.com/news/amd-and-intel-cpu-market-share-report-recovery-looms-on-the-horizon


    And of course, i have just talked about quantum computers in my below previous thoughts, but i think i have to explain something important so that you understand: for a classical parallel computer to work on a billion inputs at once, we would need a billion different processors, but in a quantum computer a single register of about 30 qubits can represent around a billion states at once, since each qubit can be in a superposition of the two states 0 and 1, and this is known as quantum parallelism. But connecting quantum computing to "Moore's Law" is sort of foolish -- it is not an all-purpose technique for faster computers, but a limited technique that makes certain types of specialized problems easier, while leaving most of the things we actually use computers for unaffected.
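
    And to make the "billion" figure concrete: a register of n qubits can hold a superposition over 2^n basis states, so about 30 qubits already cover a billion states, though reading a useful answer out of that superposition still requires a clever quantum algorithm:

    n\ \text{qubits} \;\Rightarrow\; 2^{n}\ \text{basis states}, \qquad 2^{30} = 1\,073\,741\,824 \approx 10^{9}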


    So I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, and i have just talked about artificial intelligence and about my new model of what is consciousness, and you can read about it in my below thoughts, and now i will talk about quantum computing: i have just looked at the following video about the powerful parallel quantum computer of IBM from the USA that will soon be available in the cloud, and i invite you to look at it:

    Quantum Computing: Now widely available!

    https://www.youtube.com/watch?v=laqpfQ8-jFI


    But i have just read the following paper, and it says that powerful quantum algorithms for matrix operations and for linear systems of equations are available, so as you will notice in the following paper, many matrix operations and also a solver of linear systems of equations can be run on a quantum computer; read about it here in the following paper:

    Quantum algorithms for matrix operations and linear systems of equations

    Read more here:

    https://arxiv.org/pdf/2202.04888.pdf


    So i think that IBM will do the same for their powerful parallel quantum computer that will be available in the cloud, but i think that you will have to pay for it since i think it will be commercial. But i think that there is a weakness with this kind of configuration of the powerful quantum computer from IBM: the cost of internet bandwidth is exponentially decreasing, but the latency of accessing the internet is not, so it is why i think that people will still use classical computers for many mathematical applications that use operations such as matrix operations and linear systems of equations etc. and that need a much lower latency. So i think that the business of classical computers will still be great in the future even with the coming of the powerful parallel quantum computer of IBM, and as you notice, this kind of business of quantum computers is also dependent on the latency of accessing the internet. And speaking about latency, i invite you to look at the following interesting video about the latency numbers a programmer should know:

    Latency numbers programmer should know

    https://www.youtube.com/watch?v=FqR5vESuKe0
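
    And to make the latency argument concrete, here is a rough order-of-magnitude comparison of the kind given in such "latency numbers" talks (approximate figures, not measurements of any particular system):

    \frac{t_{\text{internet round trip}}}{t_{\text{local DRAM access}}} \approx \frac{50\ \text{ms}}{100\ \text{ns}} = 5 \times 10^{5}

    So a remote accelerator in the cloud starts hundreds of thousands of memory-access times behind a local classical computer before it has done any useful work, and that is the weakness i am talking about.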



    And IBM set to revolutionize data security with latest quantum-safe technology

    Read more here in the following new article:

    https://interestingengineering.com/innovation/ibm-revolutionizes-data-security-with-quantum-safe-technology


    And I have also just read the following article that says the following:

    "AES-128 and RSA-2048 both provide adequate security against classical attacks, but not against quantum attacks. Doubling the AES key length to 256 results in an acceptable 128 bits of security, while increasing the RSA key by more than a factor of 7.5
    has little effect against quantum attacks."

    Read more here:

    https://techbeacon.com/security/waiting-quantum-computing-why-encryption-has-nothing-worry-about


    So i think that AES-256 encryption is acceptable encryption for quantum computers.
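
    And the reason is Grover's algorithm: a quantum computer can search an unstructured key space of size N in roughly the square root of N evaluations, so it effectively halves the key length in bits, while Shor's algorithm breaks RSA and elliptic-curve cryptography outright, which is why the asymmetric algorithms quoted above fare so much worse:

    \text{AES-128: } \sqrt{2^{128}} = 2^{64}\ \text{evaluations}, \qquad \text{AES-256: } \sqrt{2^{256}} = 2^{128}\ \text{evaluations}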


    And Symmetric encryption, or more specifically AES-256, is believed to be quantum resistant. That means that quantum computers are not expected to be able to reduce the attack time enough to be effective if the key sizes are large enough, and to give you
    more proof of it, look at the following article from ComputerWorld where Lamont Wood is saying:

    "But using quantum technology with the same throughput, exhausting the possibilities of a 128-bit AES key would take about six months. If a quantum system had to crack a 256-bit key, it would take about as much time as a conventional computer needs to
    crack a 128-bit key.
    A quantum computer could crack a cipher that uses the RSA or EC algorithms almost immediately."

    Read more here on ComputerWorld:

    https://www.computerworld.com/article/2550008/the-clock-is-ticking-for-encryption.html


    And about Symmetric encryption and quantum computers..

    Symmetric encryption, or more specifically AES-256, is believed to be quantum resistant. That means that quantum computers are not expected to be able to reduce the attack time enough to be effective if the key sizes are large enough.

    Read more here:

    Is AES-256 Quantum Resistant?

    https://medium.com/@wagslane/is-aes-256-quantum-resistant-d3f776163672


    And it is why i have implemented parallel AES encryption with 256-bit keys in my following interesting software project called Parallel Archiver; you can read about it and download it from here:

    https://sites.google.com/site/scalable68/parallel-archiver
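
    And here is a conceptual Delphi sketch of the general idea of encrypting independent chunks of a buffer in parallel; it is NOT the code of Parallel Archiver, and AESEncryptChunk256 is a hypothetical placeholder for whatever real AES-256 routine you link in (each chunk would also need its own IV or counter so that the chunks stay independent):

    // Conceptual sketch only: the AES routine below is a hypothetical
    // placeholder, and the chunking scheme is just for illustration.
    program ParallelEncryptSketch;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, System.Math, System.Threading;

    type
      TKey256 = array[0..31] of Byte;   // a 256-bit key

    // Hypothetical placeholder: encrypt Count bytes of Data starting at Offset,
    // in place, with AES-256 (for example in a counter mode, so that every
    // chunk can be processed independently of the others).
    procedure AESEncryptChunk256(var Data: TBytes; Offset, Count: Integer;
      const Key: TKey256);
    begin
      // ... call your real AES-256 implementation here ...
    end;

    procedure ParallelEncrypt(Data: TBytes; const Key: TKey256; ChunkSize: Integer);
    var
      ChunkCount: Integer;
      Buf: TBytes;
      K: TKey256;
    begin
      Buf := Data;   // TBytes is a dynamic array, so Buf references the same bytes
      K := Key;      // local copies that the anonymous method can safely capture
      ChunkCount := (Length(Buf) + ChunkSize - 1) div ChunkSize;
      // Every chunk is independent, so the chunks can be encrypted in parallel.
      TParallel.&For(0, ChunkCount - 1,
        procedure(I: Integer)
        var
          Offset, Count: Integer;
        begin
          Offset := I * ChunkSize;
          Count := Min(ChunkSize, Length(Buf) - Offset);
          AESEncryptChunk256(Buf, Offset, Count, K);
        end);
    end;

    var
      Data: TBytes;
      Key: TKey256;
    begin
      SetLength(Data, 10 * 1024 * 1024);   // 10 MB of demo data
      FillChar(Key, SizeOf(Key), $AA);     // demo key only, never use a fixed key
      ParallelEncrypt(Data, Key, 256 * 1024);
      Writeln('Encrypted ', Length(Data), ' bytes in parallel chunks.');
    end.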



    More of my philosophy about 3DS ECC RDIMMs and more of my thoughts..


    So there is still one important thing that i want to explain: as you notice, i have just advised you to buy from the below cost-effective AMD EPYC Genoa processors with the below good motherboard for them from Supermicro, and Supermicro motherboards are known for their high quality, reliability, and flexibility, and are used by many companies in various industries. And of course i have explained that one advantage is that it supports 12 memory channels, but not only that: the Supermicro motherboard that i am advising you below supports both DDR5 memory that is not fully ECC and 3DS ECC RDIMM memory that is fully ECC (Error-Correcting Code). But i think that 3DS ECC RDIMM is advantageous, since you have to be professional and use 3DS ECC RDIMMs that are "reliable", so you have to read my following thoughts so that you notice it:


    3DS ECC RDIMMs are fully ECC (Error-Correcting Code) memory modules.

    "3DS" stands for "Three Dimensional Stacking" and refers to the technology used in these memory modules to stack multiple layers of memory cells on top of each other. This allows for higher memory densities and capacities in a single module.

    "ECC" stands for "Error-Correcting Code" and is a type of memory technology that can detect and correct errors that may occur when data is stored in memory. ECC memory is commonly used in servers and other mission-critical systems where data integrity is
    of utmost importance.

    Therefore, 3DS ECC RDIMMs combine the benefits of both 3D stacking and ECC technology to provide high-density, high-capacity memory modules with built-in error correction capabilities.


    And you have to read the following article that says the following:

    "On-die ECC: The presence of on-die ECC on DDR5 memory has been the subject of many discussions and a lot of confusion among consumers and the press alike. Unlike standard ECC, on-die ECC primarily aims to improve yields at advanced process nodes,
    thereby allowing for cheaper DRAM chips. On-die ECC only detects errors if they take place within a cell or row during refreshes. When the data is moved from the cell to the cache or the CPU, if there’s a bit-flip or data corruption, it won’t be
    corrected by on-die ECC. Standard ECC corrects data corruption within the cell and as it is moved to another device or an ECC-supported SoC."

    Read more here to notice it:

    https://www.hardwaretimes.com/ddr5-vs-ddr4-ram-quad-channel-and-on-die-ecc-explained/


    So i will say that the new DDR5's on-die ECC can only detect errors that occur within a cell or row during refreshes, and that it may not be able to correct errors that occur when data is moved from the cell to the cache or the CPU.

    DDR5's on-die ECC is designed to detect and correct single-bit errors within a memory cell or row during refresh operations. This is accomplished by adding extra bits to the memory data that are used to detect errors. If an error is detected, the on-die
    ECC mechanism can correct it by using the extra bits to identify and correct the erroneous bit.

    However, if an error occurs when the data is moved from the cell to the cache or the CPU, it may not be corrected by on-die ECC. In such cases, a more robust ECC mechanism may be necessary to ensure data integrity.

    Standard ECC, which is used in some DDR4 memory modules, can correct errors not only within the cell but also when the data is moved to another device or an ECC-supported SoC. This is accomplished by adding extra bits to the memory data that are used to
    detect and correct errors not only within the cell but also during data transfers.

    Overall, while DDR5's on-die ECC mechanism can provide some level of error detection and correction, it may not be sufficient for all use cases. So a more robust ECC mechanism, such as standard ECC, may be necessary.
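
    And so that you see the mechanism of the "extra bits" concretely, here is a minimal Delphi sketch of a Hamming(7,4) code, which adds 3 parity bits to 4 data bits so that any single flipped bit can be located and corrected; real ECC DIMMs use wider SECDED codes (typically 72 stored bits for 64 data bits), but the principle is the same:

    // Minimal Hamming(7,4) sketch: parity bits sit at positions 1, 2 and 4 of
    // the 7-bit codeword, and the recomputed parities (the syndrome) point
    // directly at the position of a single flipped bit.
    program HammingSketch;

    {$APPTYPE CONSOLE}

    type
      TCodeword = array[1..7] of Byte;  // positions 1..7, each holding 0 or 1

    // Encode 4 data bits into a 7-bit codeword.
    procedure Encode(const D: array of Byte; out C: TCodeword);
    begin
      C[3] := D[0];  C[5] := D[1];  C[6] := D[2];  C[7] := D[3];
      C[1] := D[0] xor D[1] xor D[3];   // parity over positions 1,3,5,7
      C[2] := D[0] xor D[2] xor D[3];   // parity over positions 2,3,6,7
      C[4] := D[1] xor D[2] xor D[3];   // parity over positions 4,5,6,7
    end;

    // Recompute the parities; the syndrome gives the index of the flipped bit
    // (0 means no error), which is then corrected in place.
    function CorrectSingleBitError(var C: TCodeword): Integer;
    begin
      Result := (C[1] xor C[3] xor C[5] xor C[7]) * 1
              + (C[2] xor C[3] xor C[6] xor C[7]) * 2
              + (C[4] xor C[5] xor C[6] xor C[7]) * 4;
      if Result <> 0 then
        C[Result] := C[Result] xor 1;   // flip the bad bit back
    end;

    var
      C: TCodeword;
      ErrPos: Integer;
    begin
      Encode([1, 0, 1, 1], C);   // the 4 data bits to protect
      C[6] := C[6] xor 1;        // simulate a single-bit memory error
      ErrPos := CorrectSingleBitError(C);
      Writeln('Corrected a single-bit error at codeword position ', ErrPos);
    end.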


    So you have to know why i am also advising you to buy from the below new AMD EPYC Genoa CPUs with the below new motherboard that supports 12 memory channels, since you have to know that i also know how to do parallel programming and how to program supercomputers, and of course i am a sophisticated inventor too. The 12 memory channels are really advantageous for doing parallel programming with arrays and with other data structures etc., but not only that: the High Performance Linpack (HPL) matrix math test is not particularly memory bound, it is CPU bound, but the High Performance Conjugate Gradients (HPCG) and Stream Triad benchmarks are memory bound, and the HPCG benchmark generates a synthetic discretized three-dimensional partial differential equation model problem and computes preconditioned conjugate gradient iterations for the resulting sparse linear system, so i think that the preconditioned conjugate gradient also takes advantage of having many more memory channels. And you have to know that sparse linear system solvers, which also solve with the preconditioned conjugate gradient, are ubiquitous in high performance computing (HPC) and are often the most computationally intensive parts of scientific computing codes. A few of the many applications relying on them include fusion energy simulation, space weather simulation, climate modeling, environmental modeling, the finite element method, and large-scale reservoir simulations to enhance oil recovery in the oil and gas industry.
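
    And here is a minimal Delphi sketch (not a real benchmark) of a Stream-Triad style kernel parallelized with TParallel.&For from the System.Threading unit; each iteration does very little arithmetic per byte loaded from memory, so once many cores are running it, the limit becomes memory bandwidth, which is exactly why the extra memory channels help this kind of parallel array code:

    // Stream-Triad style kernel: A[i] := B[i] + Scalar * C[i].
    program TriadSketch;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, System.Threading;

    const
      N = 10000000;          // about 240 MB across the three Double arrays
      ChunkSize = 100000;    // give each task a big block to keep overhead low

    var
      A, B, C: TArray<Double>;
      I: Integer;
    begin
      SetLength(A, N);
      SetLength(B, N);
      SetLength(C, N);
      for I := 0 to N - 1 do
      begin
        B[I] := 1.0;
        C[I] := 2.0;
      end;
      // Every block is independent, so the work is split across all cores and
      // the throughput then scales with the available memory bandwidth.
      TParallel.&For(0, (N div ChunkSize) - 1,
        procedure(Chunk: Integer)
        var
          J: Integer;
        begin
          for J := Chunk * ChunkSize to Chunk * ChunkSize + ChunkSize - 1 do
            A[J] := B[J] + 3.0 * C[J];
        end);
      Writeln('A[0] = ', A[0]:0:1);   // expected: 7.0
    end.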


    More of my philosophy of why to choose AMD EPYC Genoa CPU..


    I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, so i am advising you to buy from the below cost-effective new EPYC 9004 Genoa CPUs from the USA and to buy the below Supermicro server motherboard MBD-H13SSL-N, since it is like an all-in-one: i mean that it supports many options for both the workstation and the server and it is cost-effective. Other than that, you have to be smart, since the new AMD Ryzen CPUs that support the Zen 4 architecture support just 2 channels of memory, so it is why i am advising you to buy from the below new EPYC Genoa CPUs from the USA and to buy the below new motherboard, since when you are using multiple threads in parallel programming you can parallelize across 12 channels, not just two channels, so it can give much more memory bandwidth for parallel programming. So read my previous thoughts below so that you understand more.
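
    And as a rough idea of what the extra channels buy you (assuming DDR5-4800 modules on both platforms; these are theoretical peaks, and the sustained bandwidth will be lower):

    \text{per channel: } 4800\ \text{MT/s} \times 8\ \text{bytes} = 38.4\ \text{GB/s}, \qquad 2\ \text{channels} \approx 76.8\ \text{GB/s}, \qquad 12\ \text{channels} \approx 460.8\ \text{GB/s}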


    More of my philosophy about the new AMD EPYC Genoa CPU ..


    So if you want to buy a good CPU and a good motherboard, i advise you to choose from the following cost-effective new EPYC 9004 Genoa CPUs from the USA and to choose the following motherboard that supports them:


    The Supermicro server motherboard MBD-H13SSL-N, Socket SP5 for AMD EPYC 9004 Genoa with DDR5, which comes in the ATX format, is ready to buy from here (but notice that i think the price is in Canadian dollars at newegg.ca, not in US dollars):

    https://www.newegg.ca/p/N82E16813183819?Description=epyc%207004%20cpu&cm_re=epyc_7004%20cpu-_-13-183-819-_-Product


    And notice in the product photo on the above page that the new Supermicro server motherboard MBD-H13SSL-N has 12 DIMM slots for its 12 memory channels, so it is a good motherboard for the new AMD EPYC Genoa CPUs from the USA. And of course here are the cost-effective AMD EPYC Genoa CPUs that you can buy (and of course they support AVX-512):


    The AMD EPYC 9124 is a server/workstation processor with 16 cores, launched in November 2022, at an MSRP (manufacturer's suggested retail price) of $1083. It is part of the EPYC lineup, using the Zen 4 (Genoa) architecture with Socket SP5. Thanks to AMD Simultaneous Multithreading (SMT) the core count is effectively doubled, to 32 threads. Read about it here:

    https://www.techpowerup.com/cpu-specs/epyc-9124.c2917#:~:text=The%20AMD%20EPYC%209124%20is,effectively%20doubled%2C%20to%2032%20threads.


    And the AMD EPYC 9224 is a server/workstation processor with 24 cores, launched in November 2022, at an MSRP (manufacturer's suggested retail price) of $1825. It is part of the EPYC lineup, using the Zen 4 (Genoa) architecture with Socket SP5. Thanks to AMD Simultaneous Multithreading (SMT) the core count is effectively doubled, to 48 threads. Read about it here:

    https://www.techpowerup.com/cpu-specs/epyc-9224.c2919#:~:text=The%20AMD%20EPYC%209224%20is,effectively%20doubled%2C%20to%2048%20threads.


    And the AMD EPYC 9254 is a server/workstation processor with 24 cores, launched in November 2022, at an MSRP (manufacturer's suggested retail price) of $2299; you can read about it here:

    https://www.google.com/search?q=epyc+9254+and+price



    And you can look carefully at the new AMD EPYC™ 9004 Series Server Processor Specifications here:

    https://colfax-intl.com/servers/amd-epyc-9004-series-comparison



    More of my philosophy about the future Apps and about artificial intelligence and more of my thoughts..



    I have just looked at the following new video of techlead that i know, and i invite you to look at it:

    Why ChatGPT AI Will Destroy Programmers.

    https://www.youtube.com/watch?v=U1flF5WOeNc



    I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, so i think that the above techlead in the above video is not thinking correctly, since he is saying that software programming is dying, and i say that software programming is not dying, since the future Apps are for example the Metaverse, and of course we need Zettascale or ZettaFLOP machines so that the Metaverse can be possible, and as you will notice, the article below, in which Intel’s Raja Koduri is talking, says that the architecture is possible and that it will be ready around 2027 or 2030, and it is the following:

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x which, on top of the two-ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what

    And also the other future Apps are, for example, the ones that use data to be smart in real time (read about it here: https://www.cxotoday.com/cxo-bytes/data-driven-smart-apps-are-the-future/ ), and i also say that software programming is not dying since GPT-4 and such artificial intelligence will replace just a small percentage of software programmers, since software programming also needs to care about accuracy and reliability, so you have to look at the following most important limitations of GPT-4 and such artificial intelligence so that you notice it:


    1- GPT-4 is limited in its understanding of context, since GPT-4 was trained on large amounts of text data, but it does not always have the ability to understand the full context of the text. This means that it can generate coherent sentences, but they may not always make sense in the context of the conversation.


    2- And GPT-4 is limited in its ability to generate creative or original content. GPT-4 is trained on existing text data, so it is not able to
    generate new ideas or concepts. This means that GPT-4 is not suitable
    for tasks that require creativity or originality.


    And i invite you to read the following article so that you understand more about GPT-4:

    Exploring the Limitations and Potential of OpenAI’s GPT-4

    https://ts2.space/en/exploring-the-limitations-and-potential-of-openais-gpt-4/



    And more of my philosophy about the objective function and about artificial intelligence and about my philosophy and more of my thoughts..

    I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, and i think i am understanding GPT-4 more with my fluid intelligence. So i think that GPT-4 uses deep learning, and it uses the mechanism of self-attention so that it understands the context, and it uses reinforcement learning from human feedback, which uses a reward mechanism so that it learns from the feedback of the people that are using GPT-4 and so that it ensures that this or that data is true or not etc. But i think that the problem of GPT-4 is that it needs a lot of data, and that is its first weakness, and it is dependent on the data and on the quality of the data, and that is the second weakness of GPT-4, since in the unsupervised learning that is used to train GPT-4 on the massive data, the quality of the data is not known with certitude, so it is a weakness of artificial intelligence such as GPT-4. And about the objective function that guides: i think that it is the patterns that are found and learned by the neural network of GPT-4 that play the role of the objective function that guides, so the objective function comes from the massive data on which GPT-4 has been trained, and i think it is also a weakness of GPT-4, since i think that what is missing is what my new model of what is consciousness explains, since the meaning from human consciousness also plays the role of the objective function, so it makes the human brain much better than artificial intelligence and it makes it need much less data, so it is why the human brain needs much less data than artificial intelligence such as GPT-4. So i invite you to read my following previous thoughts below so that you understand my views.
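
    And for reference, the self-attention mechanism that i am talking about is the scaled dot-product attention of the Transformer architecture, where each token weighs every other token so that it builds its context:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V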



    More of my philosophy about artificial intelligence such as GPT-4 and about my philosophy and more of my thoughts..


    I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, and i mean that it is "above" 115 IQ, so i have just looked more carefully at GPT-4, and i think that, as i have just explained, it will become powerful, but it is limited by the data and by the quality of the data on which it has been trained. So if it encounters a new situation to be solved and the solution can not be inferred from the data on which it has been trained, it will not be capable of solving this new situation, so i think that my new model of what is consciousness explains that what is lacking is the meaning from human consciousness that permits the solving of the problem, so my new model explains that artificial intelligence such as GPT-4 will not attain artificial general intelligence or AGI, but even so, i think that artificial intelligence such as GPT-4 will become powerful. So i think that the problematic in artificial intelligence is about the low level layers; i mean, look at the assembler programming language: it is a lower level layer than the high level programming languages, but you have to notice that the low level layer of the assembler programming language can do things that the higher level layer can not do, so for example you can play with the stack registers and low level hardware registers and low level hardware instructions etc., and notice how a low level layer like assembler programming can teach you more about the hardware, since it is really near the hardware. So i think that this is what is happening in artificial intelligence such as the new GPT-4: i mean that GPT-4 is for example trained on data so that it discovers patterns that make it more smart, but the problematic is that this layer of how it is trained on the data so that it discovers patterns is a high level layer, like the high level programming language, so i think that it is missing the low level layers of what makes the meaning, like the meaning of the past and the present and the future, or the meaning of space and matter and time.. from which you can construct the bigger meaning of other bigger things, so it is why i think that artificial intelligence will not attain artificial general intelligence or AGI, and i think that what is lacking in artificial intelligence is what my new model of what is consciousness explains, so you can read all my following thoughts in the following web link so that you understand my views about it and about different other subjects:


    https://groups.google.com/g/alt.culture.morocco/c/QSUWwiwN5yo


    Also, so that you can predict more of the technological future, you can read my following thoughts about HBM3:

    So, HBM3 offers several enhancements over HBM2E, most notably an increase of the per-pin data rate from 3.6 Gbps for HBM2E up to 6.4 Gbps for HBM3, or 819 GB/s of bandwidth per device.
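
    And the 819 GB/s figure follows directly from the per-pin data rate and the 1024-bit interface of an HBM3 stack:

    6.4\ \text{Gb/s per pin} \times 1024\ \text{pins} = 6553.6\ \text{Gb/s} \approx 819.2\ \text{GB/s per device}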



    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)