• More of my philosophy about the law of supply and demand and about GPT-

    From Amine Moulay Ramdane@21:1/5 to All on Thu May 18 13:34:42 2023
    Hello,




    More of my philosophy about the law of supply and demand and about
    GPT-4 and about copyright and about the transformer in GPT-4 and about the token limit and about the value and about large language models (LLMs) such as GPT-4 and about real-time systems and about Zettascale
    and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..

    I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..


    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so I think there is still another problem with ChatGPT and similar
    artificial intelligence, and it comes from the law of supply and demand and from a problem inherent to GPT-4, and here it is: the demand side will reason that since ChatGPT is powerful and will rapidly become
    much more powerful in the near future, why buy a product or service today, at today's price, when it will soon be fully or largely implementable with much more powerful artificial intelligence at a much lower
    price, especially since supply will then greatly exceed demand. So ChatGPT and the like are causing this serious problem for the economic system.


    More of my philosophy about copyright and about GPT-4 and about the transformer in GPT-4 and about the token limit and about the value and about large language models (LLMs) such as GPT-4 and about real-time systems and
    about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..

    I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..


    I think that the output of ChatGPT can be used freely without seeking permission or obtaining a license, but when you want to do business
    with it, you cannot, since it is important to note that the responses generated by ChatGPT may contain information that is protected by copyright law, and you cannot know where the data comes from, since
    the data on which ChatGPT and the like have been trained is a mixture of licensed data, data created by human trainers, and publicly available data. Therefore, you wouldn't be able to copyright the specific outputs of ChatGPT, as they are the result of
    a combination of various data sources and algorithms. And I invite you to read my previous thoughts below about
    GPT-4 and about other subjects of technology so that you understand my views:


    More of my philosophy about the transformer in GPT-4 and about the token limit and about the value and about large language models (LLMs) such as GPT-4 and about real-time systems and about Zettascale and Exascale
    supercomputers and about quantum computers and about technology and more of my thoughts..


    So GPT-4 is based on transformers, so it is a deep learning model, and it is distinguished by its adoption of self-attention; with self-attention, the transformer network pays "attention" to multiple parts of the input at once, enabling it to grasp "context"
    and "antecedents", so for example when you say the following sentence:


    "The animal didn't cross the street because it was too tired"


    So we can ask how the artificial intelligence of GPT-4, which uses a
    Generative Pre-trained Transformer, will understand that the word "it" in
    the above sentence refers not to the street but to the animal. I say that
    it is with the self-attention and attention mechanisms, and with training on more and more data, by looking at more and more sentences in the data on which GPT-4 has been trained, that the transformer can "detect" the
    pattern that "it" refers to "the animal" in the above sentence. So the self-attention and attention of GPT-4's Generative Pre-trained Transformer permit it to grasp "context" and "antecedents" too; it is like
    logically inferring the patterns, using self-attention and attention, from the context of the many, many sentences in the data. And since the data is growing exponentially, and since the artificial intelligence of GPT-4 is also generative, I think this
    will make the artificial intelligence of the transformer of GPT-4 much more powerful. So as you notice, the data is king, and the "generative" word in Generative Pre-trained Transformer refers to the model's ability to generate text,
    and of course we are now noticing that this is making GPT-4 really useful and powerful, but you have to understand the limitations of GPT-4 by reading my thoughts below carefully. And of course the transformer of GPT-4 is also deep learning, so it is
    the neural network with the many parameters where the patterns, like in the above example sentence, will be recorded, and of course the pre-training of the transformer of GPT-4 is unsupervised. And as I have just said, GPT-4 will be improved much more
    when it is trained on a substantially larger amount of data, considering the article that DeepMind published demonstrating that the performance of these models can be drastically improved by scaling data more aggressively than
    parameters (read it here: https://arxiv.org/pdf/2203.15556.pdf ), and of course you have
    to understand that to make the transformer of GPT-4 energy efficient and to scale it correctly, you have to know how to set the number of parameters, so read my thoughts below so that you understand more:
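
    But first, to make the self-attention mechanism concrete, here is a minimal sketch in Python of scaled dot-product attention, the core operation of the transformer. The embeddings and the three projection matrices here are random toy values that I made up for illustration; in GPT-4 they are learned parameters, and it is the learned values that make the row for "it" put most of its weight on "animal" rather than "street":

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)   # similarity of every token with every other token
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over each row
        return weights @ V, weights

    rng = np.random.default_rng(0)
    tokens = ["The", "animal", "didn't", "cross", "the", "street",
              "because", "it", "was", "too", "tired"]
    X = rng.normal(size=(len(tokens), 8))                        # stand-in embeddings
    W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))  # learned in a real model
    output, weights = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
    print(np.round(weights[tokens.index("it")], 2))  # attention of "it" over all tokens

    And on the scaling point above: the DeepMind paper I referenced (the "Chinchilla" paper) found that compute-optimal training uses roughly 20 training tokens per model parameter, which is why, for example, its 70-billion-parameter model was trained on about 1.4 trillion tokens; that is the sense in which you have to know how to set the number of parameters.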


    More of my philosophy about the token limit and about GPT-4 and about the value and about large language models (LLMs) such as GPT-4 and about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers
    and about technology and more of my thoughts..


    GPT-4 has a maximum token limit of 32,000 tokens (equivalent to about 25,000 words),
    which is a significant increase from GPT-3.5's 4,000 tokens (equivalent to about 3,125 words), and having more tokens in a large language model like GPT-4 provides several benefits, and here they are:

    - Increased Context: More tokens allow the model to consider a larger context when generating responses. This can lead to a better understanding of complex queries and enable more accurate and relevant responses.

    - Longer Conversations: With more tokens, the model can handle longer conversations without truncating or omitting important information. This is particularly useful when dealing with multi-turn conversations or discussions that require a deep
    understanding of the context.

    - Enhanced Coherence: Additional tokens enable the model to maintain a coherent and consistent narrative throughout a conversation. It helps avoid abrupt changes in topic or tone and allows for smoother interactions with users.

    - Improved Accuracy: Having more tokens allows the model to capture finer details and nuances in language. It can lead to more accurate and precise responses, resulting in a higher quality conversational experience.

    - Expanded Knowledge Base: By accommodating more tokens, the model can incorporate a larger knowledge base during training, which can enhance its understanding of various topics and domains. This can result in more informed and insightful responses to a
    wide range of queries.

    - Reduced Information Loss: When a model is constrained by a token limit, it may need to truncate or remove parts of the input text, leading to potential loss of information. Having more tokens minimizes the need for such truncation, helping to preserve
    the integrity of the input and generate more accurate responses.

    - Support for Richer Formatting: Increased token capacity allows for more extensive use of formatting, such as HTML tags or other markup language, to provide visually appealing and structured responses.


    It's important to note that while having more tokens can bring these benefits, it also comes with computational limitations and increased inference time. Finding a balance between token count and computational resources is crucial for practical
    deployment of language models.
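
    As a concrete illustration of the token budget, here is a small sketch using tiktoken, OpenAI's open-source tokenizer library; the exact token counts vary with the text, and the truncation budget is just the 32,000 figure from above used as an example:

    import tiktoken  # pip install tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")

    text = "The animal didn't cross the street because it was too tired."
    print(len(enc.encode(text)), "tokens for", len(text.split()), "words")
    # English prose averages roughly 0.75 words per token, which is how
    # 32,000 tokens comes out to about 25,000 words.

    def truncate_to_budget(text, budget):
        """Keep only the last `budget` tokens of a long input, since the
        model cannot see anything beyond its context window."""
        tokens = enc.encode(text)
        return enc.decode(tokens[-budget:]) if len(tokens) > budget else text

    long_input = text * 5000  # far more than the window can hold
    print(len(enc.encode(truncate_to_budget(long_input, 32000))))  # <= 32000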


    More of my philosophy about GPT-4 and about the value and about large language models (LLMs) such as GPT-4 and about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..



    So I think that GPT-4 has the following limitations:


    1- GPT-4 lacks understanding of context: GPT-4 was trained
    on large amounts of text data, but it does not have the ability to
    truly understand the context of the text. This means that it can generate
    coherent sentences, but they may not always make sense in the context
    of the conversation.

    2- And GPT-4 is limited in its ability to generate creative or original content: GPT-4 is trained on existing text data, so it is not able to
    generate new ideas or concepts. This means that GPT-4 is not suitable
    for tasks that require creativity or originality.


    So that is why I think that you cannot learn computer programming or software development from GPT-4; you have to learn from the ones that know how to do computer programming or software development. So I think that even with GPT-4, or the soon-coming
    GPT-5, you can still extract good value from the limitations of GPT-4 or the soon-coming GPT-5 and do business with it. So I think we still have to be optimistic about it.


    And I think ChatGPT has another problem: the generated content can infringe on the copyright of existing works. This could occur if ChatGPT generates content similar to existing copyrighted material in the data on which it has been trained.
    So you have to be careful, since it can hurt your business, but you have to know that copyright does not protect ideas, concepts, systems, or methods of doing something. Copyright law protects the expression of ideas rather than the ideas
    themselves; in other words, it protects the specific form in which an idea is expressed, rather than the underlying idea or concept. And you have to also know that there is another problem with ChatGPT: it can generate an
    invention, and it could be argued that the creators of the model,
    OpenAI, should be able to patent the invention. However, it could also be argued that the source material used to train the model should be considered prior art, meaning that the invention would not be considered new and therefore not patentable.


    And I have just looked at the following new video from the techlead that I know, and I invite you to look at it:

    Why ChatGPT AI Will Destroy Programmers.

    https://www.youtube.com/watch?v=U1flF5WOeNc



    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so I think that the techlead in the above video is not thinking correctly when he says that software
    programming is dying. I say that software programming is not dying, since the future apps are, for example, the Metaverse, and of course we need Zettascale, or a ZettaFLOP, so that the Metaverse becomes possible. And as you notice in the article
    below, in which Intel's Raja Koduri is talking, the architecture is possible and it will be ready around 2027 to 2030, and it is the following (I spell out the arithmetic in a small calculation below):

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x, on top of the two ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what
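
    To spell out Raja Koduri's arithmetic, here is the back-of-the-envelope calculation behind the quote above; the factors are his own numbers from the interview:

    architecture = 16    # architecture jump
    power_thermals = 2   # power and thermals
    data_movement = 3    # data movement
    process = 5          # process technology
    total = architecture * power_thermals * data_movement * process
    print(total)                  # 480, i.e. "about 500x"
    print(2 * total, "ExaFLOPS")  # applied to the ~2 ExaFLOP Aurora system:
                                  # ~960 ExaFLOPS, i.e. roughly 1 ZettaFLOP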

    And the other future apps are, for example, the ones that use data to be smart in real time (read about it here: https://www.cxotoday.com/cxo-bytes/data-driven-smart-apps-are-the-future/ ), and I also say that software programming is not dying since
    GPT-4 and similar artificial intelligence will replace just a small percentage of software programmers, since software programming also needs to care about accuracy and reliability, so you have to look at the following most important limitations of GPT-4
    and such artificial intelligence so as to notice it:


    1- GPT-4 lacks understanding of context: GPT-4 was trained
    on large amounts of text data, but it does not have the ability to
    truly understand the context of the text. This means that it can generate
    coherent sentences, but they may not always make sense in the context
    of the conversation.


    2- And GPT-4 is limited in its ability to generate creative or original content: GPT-4 is trained on existing text data, so it is not able to
    generate new ideas or concepts. This means that GPT-4 is not suitable
    for tasks that require creativity or originality.


    And I invite you to read the following article so as to understand more about GPT-4:

    Exploring the Limitations and Potential of OpenAI’s GPT-4

    https://ts2.space/en/exploring-the-limitations-and-potential-of-openais-gpt-4/



    And more of my philosophy about the objective function and about artificial intelligence and about my philosophy and more of my thoughts..

    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, and I think I am understanding GPT-4 more with my fluid intelligence. So
    I think that GPT-4 uses deep learning, and it uses the mechanism of self-attention so as to understand the context, and it uses reinforcement learning from human feedback, which uses a reward mechanism so as
    to learn from the feedback of the people that are using GPT-4, so as
    to ensure that this or that data is truthful or not, etc. But I think that
    the problem of GPT-4 is that it needs a lot of data, and that is its first weakness; and it is dependent on the data and on the quality of the data, and that is the second weakness of GPT-4. So in the unsupervised learning that is
    used to train GPT-4 on the massive data, the quality of the data is not known with certainty, so it is a weakness of artificial intelligence such as GPT-4. And about the objective function that guides: I think that it is the patterns that
    are found by the neural network, and that are learned by the neural network of GPT-4, that play the role of the objective function that guides, so the objective function comes from the massive data on which GPT-4 has been trained, and I think this is also a
    weakness of GPT-4, since I think that what is missing is what my new model of what is consciousness explains: the meaning from human consciousness also plays the role of the objective function, so it makes it much better than artificial
    intelligence, and it makes it so that it needs much less data; that is why the human brain needs much less data than artificial intelligence such as GPT-4. So I invite you to read my following previous thoughts so that you understand my views:



    More of my philosophy about artificial intelligence such as GPT-4 and about my philosophy and more of my thoughts..


    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so I have just looked more carefully at GPT-4, and I think that, as I have just explained, it will become
    powerful, but it is limited by the data, and by the quality of the data, on which it has been trained. So if it encounters a new situation to be solved, and the solution cannot be inferred from the data on which it has been trained, it will not be
    capable of solving this new situation. So I think that my new model of what is consciousness explains that what is lacking is the meaning from human consciousness that permits one to solve the problem, so my new model explains that artificial
    intelligence such as GPT-4 will not attain artificial general intelligence, or AGI; but even so, I think that artificial intelligence such as GPT-4 will become powerful. So I think that the problematic in artificial intelligence is about the low-level
    layers; I mean, look at the assembler programming language: it is a lower-level layer than high-level programming languages, but you have to notice that the low-level layer of the assembler programming language can do things that the higher-level layer
    cannot do, so for example you can play with the stack registers and low-level hardware registers and low-level hardware instructions, etc., and notice how a low-level layer like assembler programming can teach you more about the hardware, since it is really
    near the hardware. So I think that this is what is happening in artificial intelligence such as the new GPT-4; I mean that GPT-4 is, for example, trained on data so as to discover patterns that make it smarter, but the problematic is that this layer of
    how it is trained on the data so as to discover patterns is a high-level layer, like a high-level programming language, so I think it is missing the low-level layers of what makes the meaning, like the meaning of the past and the present and the
    future, or the meaning of space and matter and time, from which you can construct the bigger meaning of other bigger things. So that is why I think that artificial intelligence will not attain artificial general intelligence, or AGI, and I think that what is
    lacking in artificial intelligence is what my new model of what is consciousness explains, so you can read all my following thoughts at the following web link so that you understand my views about it and about different other subjects:


    https://groups.google.com/g/alt.culture.morocco/c/QSUWwiwN5yo




    And read my following previous thoughts:



    More of my philosophy about large language models (LLMs) such as GPT-4 and about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


    I think large language models (LLMs) such as GPT-4, or the soon-coming GPT-5, will be enhanced much more, since the data on which they are trained is growing exponentially in size, and so that you understand more about the statistics that show the
    exponential growth of data (I also spell out the implied growth rate in a small calculation below), I invite you to read the following article:


    "As we move from an oil-driven era to a data-driven age that is shaped by the rapid digital transformation of global industries (also known as the “Fourth Industrial Revolution”), data is increasingly becoming voluminous, varied and valuable. The
    global datasphere has expanded and continues to grow at a breakneck speed. In a November 2018 white paper “Data Age 2025”, research firm IDC predicted that the global datasphere could increase from 33 zettabytes in 2018 to 175 zettabytes by 2025 (
    Chart 1)."


    Read more here:

    https://insights.nikkoam.com/articles/2019/12/whats_causing_the_exponential
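
    To see the growth rate implied by IDC's figures, here is a quick calculation; the 33 ZB and 175 ZB numbers are taken from the quote above:

    import math

    # IDC: 33 zettabytes in 2018 growing to 175 zettabytes by 2025.
    start, end, years = 33.0, 175.0, 2025 - 2018
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%} compound annual growth")  # about 27% per year

    # At that rate the global datasphere roughly doubles every three years:
    print(f"doubling time: {math.log(2) / math.log(1 + cagr):.1f} years")  # ~2.9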



    Also, large language models (LLMs) such as GPT-4 are improving in causal reasoning, and here is evidence of it in the following paper:


    "A large language model (LLM) such as GPT-4 can fail on some queries while succeeding to provide causal reasoning in others. What is remarkable is how few times that such errors happen: our evaluation finds that on average, large language models (LLMs)
    such as GPT-4 can outperform state-of-the-art causal algorithms in graph discovery and counterfactual inference, and can systematize nebulous concepts like necessity and sufficiency of cause by operating solely on natural language input."


    And read more in the following paper so that you understand it:


    Causal reasoning and Large Language Models: Opening a new frontier for causality

    https://arxiv.org/abs/2305.00050



    More of my philosophy about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


    I invite you to look at Kithara RealTime Suite from Germany, which is a modular real-time extension for Windows and which looks like RTX64 from the USA (you can read about RTX64 here: https://www.intervalzero.com/en-products/en-rtx64/ ). Kithara RealTime
    Suite also supports Delphi programming, and it also supports C++ and C#; read about Kithara RealTime Suite, the modular real-time extension for Windows, here:


    https://kithara.com/en/products/real-time-suite



    And I am currently working with Kithara RealTime Suite, the modular real-time extension for Windows, and I am currently implementing some interesting real-time libraries and programs in Delphi for it. And read my following thoughts so that
    you understand my views about it more:


    Delphi Integration: Kithara RealTime Suite provides seamless integration with Delphi, allowing developers to leverage the Delphi IDE and its extensive component library for real-time development.

    And yes, Delphi can be used for real-time and real-time-critical system programming, and to enhance the safety and reliability of your Delphi code, here are some suggestions:


    - Adhere to Best Practices: Follow software engineering best practices such as modular design, code reuse, and encapsulation. This can help improve code readability, maintainability, and reduce the potential for errors.

    - Apply Defensive Programming Techniques: Implement defensive programming techniques such as input validation, error handling, and boundary checks (a minimal sketch follows this list). This can help prevent unexpected behaviors, improve robustness, and enhance the safety of your code.

    - Use Code Reviews and Testing: Conduct thorough code reviews to identify and address potential issues. Implement comprehensive testing methodologies, including unit testing, integration testing, and regression testing, to catch bugs and ensure the
    correctness of your code.

    - Apply Design Patterns: Utilize design patterns that promote safety and reliability, such as the Observer pattern, State pattern, or Command pattern. These patterns can help structure your code in a more modular and maintainable way.

    - Employ Static Code Analysis Tools: Utilize static code analysis tools that are compatible with Delphi. These tools can help identify potential issues, enforce coding guidelines, and detect common programming mistakes.

    - Consider Formal Methods: While Delphi may not directly support SPARK or formal verification, you can use external tools or libraries to apply formal methods to critical parts of your codebase. Formal methods involve mathematical verification techniques
    to prove the correctness of software.

    - Documentation and Code Comments: Maintain thorough documentation and meaningful code comments. This can enhance code comprehension, facilitate future maintenance, and aid in understanding the safety measures employed in your code.


    By implementing these practices, you can improve the safety and reliability of your Delphi codebase.
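
    As a minimal sketch of the defensive-programming point above, here is an example of input validation and boundary checks; I show it in Python for brevity, and the function name and the checks are invented for illustration, but the same guard-clause pattern carries over directly to Delphi, for example with raise and exception classes:

    import math

    def read_sensor_window(samples, start, count):
        """Return `count` samples beginning at `start`, refusing bad input
        instead of silently reading out of bounds (hypothetical example)."""
        if not isinstance(start, int) or not isinstance(count, int):
            raise TypeError("start and count must be integers")
        if count <= 0:
            raise ValueError("count must be positive")
        if start < 0 or start + count > len(samples):
            raise IndexError("window [%d, %d) is outside 0..%d"
                             % (start, start + count, len(samples)))
        if any(math.isnan(s) for s in samples[start:start + count]):
            raise ValueError("window contains invalid (NaN) samples")
        return samples[start:start + count]

    The design choice here is to fail loudly at the boundary of the routine, so that a real-time caller can handle the error deterministically instead of propagating corrupted data.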


    And you can read about the new version of Delphi and buy it from the following website:

    https://www.embarcadero.com/products/delphi


    More of my philosophy about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


    "Businesses in, for example, finance, logistics, and energy, will benefit hugely from quantum’s applications for optimization, simulation, and forecasting. And one potential application of quantum computing that is exciting is in drug discovery and
    diagnostics. These quantum advantages largely depend on quantum’s elevated computing power which could enable physicians and researchers to solve problems which are otherwise intractable with classical computers. Notably, this includes the potential to
    simulate very large, complex molecules, which are actually quantum systems, meaning that a quantum computer can more effectively predict the properties, behaviours and interactions of those molecules at an atomic level. This has huge implications for
    identifying new drug candidates, the future of personalised medicine, and the ability to assess for abnormalities in tissues which cannot be discerned with the naked eye – or with current computational methods."

    Read more here:

    https://www.linkedin.com/pulse/quantum-its-all-computing-stuart-woods/


    And I invite you to read the following interesting article:


    Quantum computers are coming. Get ready for them to change everything

    https://www.zdnet.com/article/quantum-computers-are-coming-get-ready-for-them-to-change-everything/


    And there is also another way of attaining Zettascale, and it is with quantum-classical hybrid systems; read about it here:

    PREPARING FOR UPCOMING HYBRID CLASSICAL-QUANTUM COMPUTE

    https://www.nextplatform.com/2023/03/23/preparing-for-upcoming-hybrid-classical-quantum-compute/


    And of course we need Zettascale, or a ZettaFLOP, so that the Metaverse becomes possible, and as you notice in the article below, in which Intel's Raja Koduri is talking, the architecture is possible and it will be ready around 2027 to 2030, and it is the
    following:

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x, on top of the two ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what


    More of my philosophy about China and about Exascale supercomputers..

    China has already reached Exascale - on two separate systems

    Read more here:

    https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/


    And in the USA, Intel's Aurora supercomputer is now expected to exceed 2 ExaFLOPS of performance.

    Read more here:

    https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance


    But Exascale or Zettascale supercomputers will also allow us to construct an accurate map of the brain that will permit us to "reverse" engineer or understand the brain; read the following so as to notice it:
    “If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora, if everything worked beautifully,
    it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer scientist

    Read more here so that you understand:

    https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction


    Also Exascale supercomputers will allow researchers to tackle problems
    which were impossible to simulate using the previous generation of
    machines, due to the massive amounts of data and calculations involved.

    Small modular nuclear reactor (SMR) design, wind farm optimization and
    cancer drug discovery are just a few of the applications that are
    priorities of the U.S. Department of Energy (DOE) Exascale Computing
    Project. The outcomes of this project will have a broad impact and
    promise to fundamentally change society, both in the U.S. and abroad.

    Read more here:

    https://www.cbc.ca/news/opinion/opinion-exascale-computing-1.5382505


    Also, the goal of delivering safe, abundant, cheap energy from fusion is
    just one of many challenges in which exascale computing's power may
    prove decisive. That's the hope and expectation. To know more about
    the other benefits of using Exascale computing power, read more here:

    https://www.hpcwire.com/2019/05/07/ten-great-reasons-among-many-more-to-build-the-1-5-exaflops-frontier/


    And I had just said the following about the Intel company from the USA, before reading the following article:


    "And you have to know that in the quarter, Intel's sales across all product lines fell by 36.2 percent to $11.721 billion, but I think that Intel CEO Pat Gelsinger is still optimistic, and he insists that Intel's plan to grow a whopping foundry business
    will pay off, and he also believes that the PC market will rebound at some point, and Intel CEO Pat Gelsinger is also optimistic about the process and server processor roadmaps; read more here about it:
    https://www.nextplatform.com/2023/03/31/finally-some-good-news-for-the-intel-xeon-cpu-roadmap/ , so I think we have to be optimistic about Intel, and I invite you to read the other following article so that you understand more:


    https://www.theregister.com/2023/04/28/intel_28b_loss/ "



    So you can read carefully the following new article so that you understand more about this subject of the recovery of the AMD and Intel CPU market:

    AMD and Intel CPU Market Share Report: Recovery on the Horizon


    https://www.tomshardware.com/news/amd-and-intel-cpu-market-share-report-recovery-looms-on-the-horizon


    And of course, I have just talked about quantum computers in my previous thoughts below, but I think I have to explain something important so that you understand: for a classical parallel computer to explore a billion possibilities at once, we would
    need a billion different processors, but in a quantum computer a single register of about 30 qubits can do it, since each qubit can be in a superposition of the states 1 and 0, so a register of n qubits can represent 2^n basis states at once, and 2^30 is
    about a billion; this is known as quantum parallelism. But connecting quantum computing to "Moore's Law" is sort of
    foolish; it's not an all-purpose technique for faster computers, but a limited technique that makes certain types of specialized problems easier, while leaving most of the things we actually use computers for unaffected.
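
    To make the "billion" figure concrete, here is a quick sketch; it simulates the state vector classically, which is exactly what becomes infeasible as the number of qubits grows:

    import numpy as np

    n = 30
    print(2 ** n)  # 1,073,741,824 basis states, i.e. about a billion

    # A classical simulator must store one complex amplitude per basis state;
    # at 16 bytes per complex number, that is:
    print(2 ** n * 16 / 1e9, "GB just to hold the state vector")  # ~17 GB

    # For a small register we can build the uniform superposition explicitly,
    # e.g. 3 qubits -> 8 equal amplitudes (what Hadamard gates on |000> produce):
    n_small = 3
    state = np.full(2 ** n_small, 1 / np.sqrt(2 ** n_small))
    print(state, (np.abs(state) ** 2).sum())  # amplitudes; probabilities sum to 1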


    So, I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I have just talked about artificial intelligence and about my new model of what is consciousness (read about it in my thoughts below), and now I
    will talk about quantum computing. I have just looked at the following video about the powerful parallel quantum computer of IBM from the USA that will soon be available in the cloud, and I invite you to look at it:

    Quantum Computing: Now widely available!

    https://www.youtube.com/watch?v=laqpfQ8-jFI


    And I have just read the following paper, and it says that powerful quantum algorithms for matrix operations and for linear systems of equations are available, so as you will notice in the following paper, many matrix operations, and also a solver for
    linear systems of equations, can be run on a quantum computer; read about it here in the following paper:

    Quantum algorithms for matrix operations and linear systems of equations

    Read more here:

    https://arxiv.org/pdf/2202.04888.pdf
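
    For comparison, here is a minimal sketch of the classical baseline: solving a dense linear system A x = b with numpy takes O(n^3) time in general, whereas HHL-style quantum algorithms, the line of work such papers build on, promise running time polylogarithmic in n, but only under strong assumptions (a sparse, well-conditioned matrix, and needing only properties of the solution rather than all of its entries); the matrix below is a random toy example of my own:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    A = rng.normal(size=(n, n)) + n * np.eye(n)  # diagonally dominant => well-conditioned
    b = rng.normal(size=n)

    x = np.linalg.solve(A, b)        # classical dense solve, O(n^3)
    print(np.allclose(A @ x, b))     # True: residual is at numerical precision

    # An HHL-style quantum solver would instead prepare the state |b>, apply
    # phase estimation on e^{iAt}, and output a state proportional to |x>,
    # which is useful when you only need expectation values of x, not all n entries.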


    So I think that IBM will do the same for their powerful parallel quantum computer that will be available in the cloud, but I think that you will have to pay for it, since I think it will be commercial. But I think that there is a weakness with this kind
    of configuration of the powerful quantum computer from IBM: the cost of internet bandwidth is decreasing exponentially, but the latency of accessing the internet is not. So I think that people will still use classical computers for
    many mathematical applications that use mathematical operations such as matrix operations and linear systems of equations, etc., and that need a much lower latency; so I think that the business of classical computers will still be great in the future,
    even with the coming of the powerful parallel quantum computer of IBM. So as you notice, this kind of quantum computer business is also dependent on the latency of accessing the internet, and speaking about latency, I invite you to look at the following
    interesting video about the latency numbers every programmer should know:

    Latency numbers every programmer should know
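
    For reference, the classic rough order-of-magnitude figures from that well-known table are approximately: an L1 cache reference is ~0.5 ns, a main memory reference is ~100 ns, reading 1 MB sequentially from memory is ~250 microseconds, a round trip inside a datacenter is ~0.5 ms, reading 1 MB from an SSD is ~1 ms, a disk seek is ~10 ms, and a packet round trip between California and Europe is ~150 ms. The point for my argument above is the last figure: wide-area internet latency sits many orders of magnitude above local memory access, and unlike bandwidth it is bounded below by the speed of light.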


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)