• More of my philosophy about the strategy and about Russia and about the

    From Amine Moulay Ramdane@21:1/5 to All on Thu May 18 16:30:21 2023
    Hello,



    More of my philosophy about the strategy, about Russia, about the solution to a problem of ChatGPT, about GPT-4 and copyright, about the transformer in GPT-4 and its token limit, about value, about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers and about technology, and more of my thoughts..


    I am a white arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms and other algorithms..



    So I have just watched the following French video of a French
    general who talks about Russia:


    UKRAINE, RUSSIE, OTAN : LE GÉNÉRAL VINCENT DESPORTES DÉNONCE NOTRE STRATÉGIE DE GUERRE


    https://www.youtube.com/watch?v=eNgtlTWcmDw



    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. I think that the above general is following the complex path, that is, he is not simplifying the problem correctly so that we know how to solve it. The core of the problem is that Russia must not be humiliated by NATO; that is why it did not want NATO in Ukraine, and this is an important fact from which you can understand the rest. So my methodology is that we must not humiliate Russia, and we must know how to make Russia understand the way of diplomacy. Russia feels it has to defend the Russians inside Ukraine, since the Russians inside Ukraine can no longer live in peace with the Ukrainians. So you have to be strategic: NATO must not humiliate Russia by making Ukraine part of NATO, and we also have to solve the other problem of the Russians inside Ukraine who can no longer live in peace with the Ukrainians. So my way of doing it is to follow the good philosophy by also using the way of diplomacy. I invite you to read my following previous thoughts so that you understand my views:



    More of my philosophy about Russia, about the solution to a problem of ChatGPT, about GPT-4 and copyright, about the transformer in GPT-4 and its token limit, about value, about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers and about technology, and more of my thoughts..


    I think I am a new philosopher, and I will now talk in a more philosophical way about the war between Russia and Ukraine. I think the war between Russia and Ukraine is philosophically a localized war: the population of Ukraine is around 18% ethnic Russian, and Russia believes it has to defend those Russians inside Ukraine, knowing that they can no longer live in peace with the Ukrainians. That is why I think it is philosophically a localized war. Other than that, I think that racial nationalism is no longer the good standard, which is why I think that white supremacism or neo-nazism is an archaism; so Russia must not revert to racial nationalism, but has to advance in a smart way, since the modern way of doing it is what we call "patriotism". So where does the patriotism of Russia come from? It comes also from the following: look at the GDP per capita of Russia, which I think was around 12,194.78 USD in the year 2021, so it is not so bad, and I think Russia will rapidly increase its GDP per capita much more through economic growth. Russia also has to invest smartly in its R&D (research and development). Other than that, I think that morality is also an important issue, and I think Russia is smart and will tune its morality in a smart way by using laws etc. I invite you to read my thoughts at the following web link about my new ideas of my philosophy and about other subjects of
    my philosophy, so that you understand my views:

    https://groups.google.com/g/alt.culture.morocco/c/01xKREkkt-A



    More precision of my philosophy about the solution to a problem of ChatGPT, about GPT-4 and copyright, about the transformer in GPT-4 and its token limit, about value, about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers and about technology, and more of my thoughts..


    So i have just written the following:

    "I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. I think there is still another problem with ChatGPT and similar artificial intelligence, and it comes from the law of supply and demand and from a problem inherent to GPT-4, and here it is: the demand side will say that since ChatGPT is powerful and will rapidly become much more powerful in the near future, why buy a product or service today, at today's price, when it can soon be fully or largely implemented with the much more powerful future artificial intelligence that will reduce the price much more, since there will also be much more supply than demand? So ChatGPT and the like are causing this serious problem to the economic system."



    But I think I am smart, and I say that there is a solution to the above problem: large language models (LLMs) such as GPT-4 have an inherent disadvantage, which is that they are limited by the patterns they discover in the data on which they have been trained. So the problem is that an LLM such as GPT-4 can innovate with the patterns it has discovered, but it cannot innovate in general, since it does not understand meaning the way humans understand it. In my new model of what consciousness is, below, I also explain how human meaning is constructed, and my new model says that artificial intelligence will not attain artificial general intelligence, but it will become powerful. So I can logically infer from my above reasoning that the prices of innovative products or services will not be reduced as the above description of the problem claims, so I think the above problem is solved. But of course we have to make people "conscious" of my explanation of how this problem is solved, so that the economic system works correctly.



    More of my philosophy about copyright, about the transformer in GPT-4 and its token limit, about GPT-4 and value, about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers and about technology, and more of my thoughts..


    I think that the output of ChatGPT can be used freely without seeking permission or obtaining a license, but when you want to do business
    with it, you cannot, since it is important to note that the responses generated by ChatGPT may contain information that is protected by copyright law, and you cannot know where the data comes from: the data on which ChatGPT and the like have been trained is a mixture of licensed data, data created by human trainers, and publicly available data. Therefore, you would not be able to copyright the specific outputs of ChatGPT, as they are the result of a combination of various data sources and algorithms. And I invite you to read my previous thoughts below about
    GPT-4 and about other subjects of technology, so that you understand my views:


    More of my philosophy about the transformer in GPT-4 and its token limit, about GPT-4 and value, about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers and about technology, and more of my thoughts..


    So GPT-4 is based on transformers: it is a deep learning model distinguished by its adoption of self-attention. With self-attention, the transformer network pays "attention" to multiple parts of the input, enabling it to grasp "context"
    and "antecedents". For example, consider the following sentence:


    "The animal didn't cross the street because it was too tired"


    So we can ask how the artificial intelligence of GPT-4, which uses a
    Generative Pre-trained Transformer, will understand that the word "it" in
    the above sentence refers not to the street but to the animal. I say that
    it is with the self-attention and attention mechanisms of artificial intelligence, and by training on more and more data and looking at more and more sentences in the data on which GPT-4 has been trained, that the transformer can "detect" the
    pattern that "it" refers to "the animal" in the above sentence. So the self-attention and attention of GPT-4's Generative Pre-trained Transformer permit it to grasp "context" and "antecedents" too; it is like
    logically inferring the patterns, using self-attention and attention, from the context of the many, many sentences in the data. And since the data is growing exponentially, and since the artificial intelligence of GPT-4 is also generative, I think it
    will make the artificial intelligence of the transformer of GPT-4 much more powerful. So as you notice, the data is king, and the "generative" word in Generative Pre-trained Transformer refers to the model's ability to generate text,
    which, as we are now noticing, makes GPT-4 really useful and powerful. But you have to understand the limitations of GPT-4 by reading my thoughts below carefully. And of course the transformer of GPT-4 is also deep learning, so it is
    the neural network with many parameters where the patterns, like the one in the example sentence above, are recorded, and the training of the transformer of GPT-4 is unsupervised. As I have just said, GPT-4 will be improved much more
    when it is trained on a substantially larger amount of data, considering the article that DeepMind published demonstrating that the performance of these models can be drastically improved by scaling data more aggressively than
    parameters ( read it here: https://arxiv.org/pdf/2203.15556.pdf ). And of course you have
    to understand that, to make the transformer of GPT-4 energy efficient and to scale it correctly, you have to know how to set the number of parameters. So read my thoughts below so that you understand more:
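    To make the self-attention idea above concrete, here is a minimal sketch of scaled dot-product self-attention in Python with NumPy. This is an illustrative toy, not GPT-4's actual implementation: the matrix sizes, the random weights, and the single attention head are assumptions for demonstration; real transformers use many heads, learned weights, and far larger dimensions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project token embeddings into queries, keys and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Every token scores every other token; a pronoun like "it" can
    # thereby put high weight on its antecedent ("the animal").
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy input: 5 "tokens", each a 4-dimensional embedding.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

    The attention matrix `weights` says, for each token, how much it "looks at" every other token; in a trained model, those learned weights are what let the network resolve that "it" refers to "the animal".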


    More of my philosophy about the token limit, about GPT-4 and value, about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers
    and about technology, and more of my thoughts..


    GPT-4 has a maximum token limit of 32,000 tokens (equivalent to about 25,000 words),
    which is a significant increase from GPT-3.5's 4,000 tokens (equivalent to about 3,125 words), and having more tokens in a large language model like GPT-4 provides several benefits, and here they are:

    - Increased Context: More tokens allow the model to consider a larger context when generating responses. This can lead to a better understanding of complex queries and enable more accurate and relevant responses.

    - Longer Conversations: With more tokens, the model can handle longer conversations without truncating or omitting important information. This is particularly useful when dealing with multi-turn conversations or discussions that require a deep
    understanding of the context.

    - Enhanced Coherence: Additional tokens enable the model to maintain a coherent and consistent narrative throughout a conversation. It helps avoid abrupt changes in topic or tone and allows for smoother interactions with users.

    - Improved Accuracy: Having more tokens allows the model to capture finer details and nuances in language. It can lead to more accurate and precise responses, resulting in a higher quality conversational experience.

    - Expanded Knowledge Base: By accommodating more tokens, the model can incorporate a larger knowledge base during training, which can enhance its understanding of various topics and domains. This can result in more informed and insightful responses to a
    wide range of queries.

    - Reduced Information Loss: When a model is constrained by a token limit, it may need to truncate or remove parts of the input text, leading to potential loss of information. Having more tokens minimizes the need for such truncation, helping to preserve
    the integrity of the input and generate more accurate responses.

    - Support for Richer Formatting: Increased token capacity allows for more extensive use of formatting, such as HTML tags or other markup language, to provide visually appealing and structured responses.


    It's important to note that while having more tokens can bring these benefits, it also comes with computational limitations and increased inference time. Finding a balance between token count and computational resources is crucial for practical
    deployment of language models.
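    The "Reduced Information Loss" point above can be sketched in a few lines of Python. This toy splits on whitespace instead of using a real tokenizer (an assumption made for simplicity; GPT-style models use subword tokenizers), but it shows what a front end must do when a conversation exceeds the context window: the oldest tokens are dropped.

```python
def truncate_to_window(tokens, max_tokens):
    # Keep only the most recent max_tokens tokens, dropping the
    # oldest ones -- this is the information loss that a larger
    # window (e.g. 32,000 tokens instead of 4,000) postpones.
    if len(tokens) <= max_tokens:
        return tokens
    return tokens[-max_tokens:]

history = "the quick brown fox jumps over the lazy dog".split()
window = truncate_to_window(history, 4)
# window == ['over', 'the', 'lazy', 'dog'];
# "the quick brown fox jumps" has been lost from the context.
```

    A bigger window does not eliminate this truncation; it only pushes it further back in the conversation.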


    More of my philosophy about GPT-4 and value, about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers and about technology, and more of my thoughts..



    So i think that GPT-4 has the following limitations:


    1- GPT-4 lacks understanding of context: GPT-4 was trained
    on large amounts of text data, but it does not have the ability to
    understand the context of the text. This means that it can generate
    coherent sentences, but they may not always make sense in the context
    of the conversation.

    2- And GPT-4 is limited in its ability to generate creative or original content: GPT-4 is trained on existing text data, so it is not able to
    generate genuinely new ideas or concepts. This means that GPT-4 is not suitable
    for tasks that require creativity or originality.


    So that is why I think that you cannot learn computer programming or software development from GPT-4; you have to learn from those who know how to do computer programming or software development. And I think that even with GPT-4, or the soon-coming
    GPT-5, you can still extract good value around the limitations of GPT-4 or the soon-coming GPT-5 and do business with it. So I think we still have to be optimistic about it.


    And I think ChatGPT has another problem: the generated content can infringe on the copyright of existing works. This could occur if ChatGPT generates content similar to existing copyrighted material in the data on which it has been trained.
    So you have to be careful, since it can hurt your business. But you have to know that copyright does not protect ideas, concepts, systems, or methods of doing something; copyright law protects the expression of ideas rather than the ideas
    themselves. In other words, copyright law protects the specific form in which an idea is expressed, rather than the underlying idea or concept. And you also have to know that there is another problem with ChatGPT: it can generate an
    invention, and it could be argued that the creators of the model,
    OpenAI, should be able to patent the invention. However, it could also be argued that the source material used to train the model should be considered prior art, meaning that the invention would not be considered new and therefore not patentable.


    And I have just looked at the following new video from TechLead, whom I know, and I invite you to watch it:

    Why ChatGPT AI Will Destroy Programmers.

    https://www.youtube.com/watch?v=U1flF5WOeNc



    I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. I think that the TechLead in the above video is not thinking correctly when he says that software
    programming is dying. I say that software programming is not dying, since future apps include, for example, the Metaverse, and of course we need Zettascale, or a ZettaFLOP, so that the Metaverse becomes possible. And as you can notice in the article below, in which Intel's
    Raja Koduri is talking, the architecture is possible and will be ready around 2027 to 2030, and it is the following:

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x, on top of the two ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what
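    The multipliers in the quote above can be checked with a few lines of Python. The factor names are taken directly from the quote; 16 × 2 × 3 × 5 = 480, which is indeed "about 500x", and applied to the roughly 2-ExaFLOP Aurora system it lands near 1,000 ExaFLOPS, i.e. a ZettaFLOP:

```python
# Multipliers from the Raja Koduri quote above.
factors = {"architecture": 16, "power_and_thermals": 2,
           "data_movement": 3, "process": 5}

total = 1
for gain in factors.values():
    total *= gain
# total == 480, i.e. "about 500x"

aurora_exaflops = 2          # the "two ExaFLOP Aurora system"
projected = aurora_exaflops * total
# projected == 960 ExaFLOPS, close to 1,000 ExaFLOPS = 1 ZettaFLOP
```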

    And also among the other future apps are, for example, those that use data to be smart in real time (read about it here: https://www.cxotoday.com/cxo-bytes/data-driven-smart-apps-are-the-future/ ). I also say that software programming is not dying, since
    GPT-4 and such artificial intelligence will replace just a small percentage of software programmers, because software programming also needs to care about accuracy and reliability. So look at the following most important limitations of GPT-4
    and such artificial intelligence, so that you notice it:


    1- GPT-4 lacks understanding of context: GPT-4 was trained
    on large amounts of text data, but it does not have the ability to
    understand the context of the text. This means that it can generate
    coherent sentences, but they may not always make sense in the context
    of the conversation.


    2- And GPT-4 is limited in its ability to generate creative or original content: GPT-4 is trained on existing text data, so it is not able to
    generate genuinely new ideas or concepts. This means that GPT-4 is not suitable
    for tasks that require creativity or originality.


    And i invite you to read the following article so that to understand more about GPT-4:

    Exploring the Limitations and Potential of OpenAI’s GPT-4

    https://ts2.space/en/exploring-the-limitations-and-potential-of-openais-gpt-4/



    And more of my philosophy about the objective function, about artificial intelligence, about my philosophy, and more of my thoughts..

    I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, and I think I am understanding GPT-4 more with my fluid intelligence. So
    I think that GPT-4 uses deep learning and the mechanism of self-attention to understand context, and it uses reinforcement learning from human feedback (RLHF), which uses a reward mechanism
    to learn from the feedback of the people who are using GPT-4, for example
    to help establish whether this or that data is true or not. But I think that
    the problem of GPT-4 is that it needs a lot of data, and that is its first weakness; and it is dependent on the data and the quality of the data, and that is its second weakness. In the unsupervised learning that is
    used to train GPT-4 on massive data, the quality of the data is not known with certitude, so it is a weakness of artificial intelligence such as GPT-4. And about the objective function that guides: I think it is the patterns that
    are found and learned by the neural network of GPT-4 that play the role of the guiding objective function, so the objective function comes from the massive data on which GPT-4 has been trained, and I think that is also a
    weakness of GPT-4, since I think that what is missing is what my new model of what consciousness is explains: the meaning from human consciousness also plays the role of the objective function, which makes it much better than artificial
    intelligence and means that it needs much less data. That is why the human brain needs much less data than artificial intelligence such as GPT-4. So I invite you to read my following previous thoughts so that you understand my views:
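    The reward mechanism mentioned above can be illustrated with a deliberately simplified sketch. Real RLHF trains a separate reward model on human preference comparisons and then fine-tunes the language model with an algorithm such as PPO; the sketch below only shows the core idea that human feedback scores are used to rank candidate answers, and all the names and ratings are made up for the example.

```python
def pick_best(candidates, reward_fn):
    # Return the candidate the reward function scores highest,
    # standing in for "the model learns to prefer what humans rated well".
    return max(candidates, key=reward_fn)

# Hypothetical human-feedback scores for three candidate answers.
ratings = {"answer A": 0.2, "answer B": 0.9, "answer C": 0.5}
best = pick_best(ratings, ratings.get)
# best == "answer B", the highest-rated candidate
```

    In real RLHF, the reward function is itself a learned neural network, and the policy is updated toward high-reward outputs rather than simply selecting among finished candidates.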



    More of my philosophy about artificial intelligence such as GPT-4 and about my philosophy and more of my thoughts..


    I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just looked more carefully at GPT-4, and I think, as I have just explained, that it will become
    powerful, but it is limited by the data, and the quality of the data, on which it has been trained. So if it encounters a new situation to be solved, and the solution cannot be inferred from the data on which it has been trained, it will not be
    capable of solving this new situation. So I think that my new model of what consciousness is explains that what is lacking is the meaning from human consciousness that permits solving the problem, so my new model explains that artificial
    intelligence such as GPT-4 will not attain artificial general intelligence (AGI), but even so, I think that artificial intelligence such as GPT-4 will become powerful. So I think that the problematic part of artificial intelligence is about the low-level
    layers. Look at the assembler programming language: it is a lower-level layer than high-level programming languages, but you have to notice that the low-level layer of assembler can do things that the higher-level layer cannot
    do; for example, you can play with the stack registers and low-level hardware registers and low-level hardware instructions etc., and notice how a low-level layer like assembler can teach you more about the hardware, since it is really
    near the hardware. So I think that is what is happening in artificial intelligence such as the new GPT-4: GPT-4 is trained on data to discover patterns that make it smarter, but the problematic part is that this layer, of
    how it is trained on the data to discover patterns, is a high-level layer, like a high-level programming language. So I think it is missing the low-level layers of what makes meaning, like the meaning of the past, the present and the
    future, or the meaning of space, matter and time, from which you can construct the bigger meaning of other, bigger things. So that is why I think that artificial intelligence will not attain artificial general intelligence (AGI), and I think that what is
    lacking in artificial intelligence is what my new model of what consciousness is explains. So you can read all my following thoughts at the following web link so that you understand my views about it and about different other subjects:


    https://groups.google.com/g/alt.culture.morocco/c/QSUWwiwN5yo




    And read my following previous thoughts:



    More of my philosophy about large language models (LLMs) such as GPT-4, about real-time systems, about Zettascale and Exascale supercomputers, about quantum computers and about technology, and more of my thoughts..


    I think large language models (LLMs) such as GPT-4, or the soon-coming GPT-5, will be enhanced much more, since the data on which they are trained is growing exponentially in size. And to understand more about the statistics that show the
    exponential growth of data, I invite you to read the following article:


    "As we move from an oil-driven era to a data-driven age that is shaped by the rapid digital transformation of global industries (also known as the “Fourth Industrial Revolution”), data is increasingly becoming voluminous, varied and valuable. The
    global datasphere has expanded and continues to grow at a breakneck speed. In a November 2018 white paper “Data Age 2025”, research firm IDC predicted that the global datasphere could increase from 33 zettabytes in 2018 to 175 zettabytes by 2025 (
    Chart 1)."


    Read more here:

    https://insights.nikkoam.com/articles/2019/12/whats_causing_the_exponential



    Also, large language models (LLMs) such as GPT-4 are improving in causal reasoning, and here is the proof of it, in the following paper:


    "A large language model (LLM) such as GPT-4 can fail on some queries while succeeding to provide causal reasoning in others. What is remarkable is how few times that such errors happen: our evaluation finds that on average, large language models (LLMs)
    such as GPT-4 can outperform state-of-the-art causal algorithms in graph discovery and counterfactual inference, and can systematize nebulous concepts like necessity and sufficiency of cause by operating solely on natural language input."


    And read more in the following paper so that you understand it:


    Causal reasoning and Large Language Models: Opening a new frontier for causality

    https://arxiv.org/abs/2305.00050



    More of my philosophy about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


    I invite you to look at Kithara RealTime Suite from Germany, which is a modular real-time extension for Windows that looks like RTX64 from the USA ( you can read about RTX64 here: https://www.intervalzero.com/en-products/en-rtx64/ ). Kithara RealTime Suite
    also supports Delphi programming, as well as C++ and C#. Read about Kithara RealTime Suite, the modular real-time extension for Windows, here:


    https://kithara.com/en/products/real-time-suite



    And I am also currently working with Kithara RealTime Suite, the modular real-time extension for Windows, and I am currently implementing some interesting real-time libraries and programs in Delphi for it. And read my following thoughts so that
    you understand my views about it more:


    Delphi Integration: Kithara RealTime Suite provides seamless integration with Delphi, allowing developers to leverage the Delphi IDE and its extensive component library for real-time development.

    And yes, Delphi can be used for real-time and real-time-critical system programming, and to enhance the safety and reliability of your Delphi code, here are some suggestions:


    - Adhere to Best Practices: Follow software engineering best practices such as modular design, code reuse, and encapsulation. This can help improve code readability, maintainability, and reduce the potential for errors.

    - Apply Defensive Programming Techniques: Implement defensive programming techniques such as input validation, error handling, and boundary checks. This can help prevent unexpected behaviors, improve robustness, and enhance the safety of your code.

    - Use Code Reviews and Testing: Conduct thorough code reviews to identify and address potential issues. Implement comprehensive testing methodologies, including unit testing, integration testing, and regression testing, to catch bugs and ensure the
    correctness of your code.

    - Apply Design Patterns: Utilize design patterns that promote safety and reliability, such as the Observer pattern, State pattern, or Command pattern. These patterns can help structure your code in a more modular and maintainable way.

    - Employ Static Code Analysis Tools: Utilize static code analysis tools that are compatible with Delphi. These tools can help identify potential issues, enforce coding guidelines, and detect common programming mistakes.

    - Consider Formal Methods: While Delphi may not directly support SPARK or formal verification, you can use external tools or libraries to apply formal methods to critical parts of your codebase. Formal methods involve mathematical verification techniques
    to prove the correctness of software.

    - Documentation and Code Comments: Maintain thorough documentation and meaningful code comments. This can enhance code comprehension, facilitate future maintenance, and aid in understanding the safety measures employed in your code.


    By implementing these practices, you can improve the safety and reliability of your Delphi codebase.
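    One of the suggestions above, defensive programming, can be made concrete with a short sketch. Although this section discusses Delphi, the sketch is given in Python for illustration, and the function name and range limits are hypothetical; the same pattern (validate the input, enforce boundaries, fail loudly) carries over directly to Delphi using exceptions and range checks.

```python
def read_sensor_value(raw: str, lo: float = 0.0, hi: float = 100.0) -> float:
    # Defensive parsing: validate the input and enforce boundaries
    # before the value ever reaches the control logic.
    try:
        value = float(raw)
    except ValueError as exc:
        raise ValueError(f"not a number: {raw!r}") from exc
    if not (lo <= value <= hi):
        raise ValueError(f"value {value} outside [{lo}, {hi}]")
    return value

ok = read_sensor_value("42.5")   # accepted: within [0.0, 100.0]
```

    Inputs such as "abc" or "150" are rejected with a clear error instead of propagating garbage into a real-time system, which is exactly the robustness the defensive-programming suggestion aims for.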


    And you can read about the new version of Delphi and buy it from the following website:

    https://www.embarcadero.com/products/delphi


    More of my philosophy about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


    "Businesses in, for example, finance, logistics, and energy, will benefit hugely from quantum’s applications for optimization, simulation, and forecasting. And one potential application of quantum computing that is exciting is in drug discovery and
    diagnostics. These quantum advantages largely depend on quantum’s elevated computing power which could enable physicians and researchers to solve problems which are otherwise intractable with classical computers. Notably, this includes the potential to
    simulate very large, complex molecules, which are actually quantum systems, meaning that a quantum computer can more effectively predict the properties, behaviours and interactions of those molecules at an atomic level. This has huge implications for
    identifying new drug candidates, the future of personalised medicine, and the ability to assess for abnormalities in tissues which cannot be discerned with the naked eye – or with current computational methods."

    Read more here:

    https://www.linkedin.com/pulse/quantum-its-all-computing-stuart-woods/


    And i invite you to read the following interesting article:


    Quantum computers are coming. Get ready for them to change everything

    https://www.zdnet.com/article/quantum-computers-are-coming-get-ready-for-them-to-change-everything/


    And there is also another way of attaining Zettascale, and it is with quantum-classical hybrid systems; read about it here:

    PREPARING FOR UPCOMING HYBRID CLASSICAL-QUANTUM COMPUTE

    https://www.nextplatform.com/2023/03/23/preparing-for-upcoming-hybrid-classical-quantum-compute/


    And of course we need Zettascale, or a ZettaFLOP, so that the Metaverse becomes possible, and as you can notice in the article below, in which Intel's Raja Koduri is talking, the architecture is possible and will be ready around 2027 to 2030, and it is the
    following:

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x, on top of the two ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what


    More of my philosophy about China and Exascale supercomputers..

    China has already reached Exascale - on two separate systems

    Read more here:

    https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/


    And in the USA, Intel's Aurora supercomputer is now expected to exceed 2 ExaFLOPS of performance.

    Read more here:

    https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance


    But Exascale or Zettascale supercomputers will also allow us to construct an accurate map of the brain, which allows us to "reverse engineer" or understand the brain. Read the following so that you notice it:

    “If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora, if everything worked beautifully,
    it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer scientist

    Read more here so that you understand:

    https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)