• More of my philosophy about the Simulation hypothesis and about my phil

    From Amine Moulay Ramdane@21:1/5 to All on Fri Oct 27 19:07:25 2023
    Hello,


    More of my philosophy about the Simulation hypothesis, about my philosophy, about the chip startup Tachyum, about the future technologies, about artificial intelligence, and more of my thoughts..

    I am a white Arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms and other algorithms..


    As you have just noticed, I have just posted a new article about the Simulation hypothesis, and I think that the Simulation hypothesis is not true, since from my new model below of what human consciousness is, you can notice that there is a primitive consciousness, but the rest of human consciousness requires biology, since it is also composed of the human "sensations". So, since it is also biologic, I don't think that the Simulation hypothesis is true, and I invite you to read carefully my new model below of what human consciousness is and how it explains that we will not attain artificial general intelligence:


    So I think I am also a new philosopher, and you have to discover an important pattern with your fluid intelligence, and here it is: the problem is that you can think that competition in capitalism is on cost and/or quality and/or quantity, and since it is also on cost, it can hurt the wages of workers. But I think that is not the smart way to look at it, since capitalism is not a zero-sum game: with a reasonable level of growth, capitalism can both deliver rising living standards for most and still ensure a relatively high rate of return to the owners of capital. So you have to look at it in a holistic or general way and say that even if competition in capitalism on, for example, cost can hurt wages locally here and there, holistically or generally capitalism is not a zero-sum game, so it is a positive thing even inside a country, even if competition on cost can hurt the wages of workers in a local manner here and there inside the country. So the smart idea is that you have to know how to adapt efficiently so as to minimize as best you can the disadvantages of competition in capitalism that can hurt wages locally here and there. So the important key is efficient adaptation. Other than that, you have to read about the other problems in capitalism that have to be solved by "progressive" capitalism in my following thoughts in the following web link:

    https://groups.google.com/g/alt.culture.morocco/c/KN4V-rGRDdU


    So I think I am a new philosopher, and in order to understand my new philosophical ideas correctly, I invite you to read them carefully in the following two web links:

    https://groups.google.com/g/alt.culture.morocco/c/v8pdduG6LcY

    https://groups.google.com/g/alt.culture.morocco/c/xcWGmtgEu6o


    And you can read my new proverbs in the following web link:

    https://groups.google.com/g/alt.culture.morocco/c/PfmfrhCk2cU


    The chip startup Tachyum will make 50-exaflop supercomputers and 8-zettaflop AI computers in 2025, and Tachyum has booked a purchase order to build a system with 25,000x ChatGPT4 capacity that is 25x faster than current supercomputers.

    Read more in the following interesting article:

    https://www.tachyum.com/media/press-releases/2023/10/03/tachyum-books-purchase-order-to-build-system-with-25000x-chatgpt4-capacity-and-25x-faster-than-current-supercomputers/


    And exascale or zettascale supercomputers will also make it possible to construct an accurate map of the brain that allows us to "reverse engineer" or understand the brain; read the following to notice it:

    “If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora of 2 ExaFLOPS, if everything worked beautifully, it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer
    scientist

    Read more here:

    https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction


    So the 50-exaflop supercomputers built with the chip from the startup Tachyum would take only 40 days of compute time for a whole mouse brain.
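    The 40-day figure follows from simple linear scaling of the Argonne estimate quoted above (about 1,000 days on Aurora's 2 exaFLOPS). Here is a quick sanity check of that arithmetic; note that it assumes perfect scaling efficiency, which real workloads rarely achieve:

```python
# Naive linear scaling of compute time with peak FLOPS.
# Assumption: the workload scales perfectly, which is optimistic in practice.

aurora_flops = 2e18      # Aurora: ~2 exaFLOPS
aurora_days = 1000       # estimated whole-mouse-brain compute time on Aurora
tachyum_flops = 50e18    # Tachyum's announced 50-exaflop target

tachyum_days = aurora_days * aurora_flops / tachyum_flops
print(tachyum_days)  # → 40.0
```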


    7 TB of data for 10,000 years: the glass plate that will revolutionize storage

    "This new storage technology could be capable of accumulating up to 7 TB of data and maintaining its integrity for 10,000 years . We are still far from Seagate's HAMR disks capable of containing 30 TB of data , but the properties of the Silica project
    make it a real alternative to current storage."

    Read more here (you can translate it from French to English):

    https://www.tomshardware.fr/7-to-de-donnees-pour-10-000-ans-la-plaque-de-verre-qui-va-revolutionner-le-stockage/


    But there is another interesting revolutionary technology coming, and here it is:

    Now UltraRAM is actually here to make everything faster, and it is going to turn your SSD into a really fast device because it doesn't need power to retain information, and it has 4,000x more endurance than NAND and can store data for 1,000+ years. It is also designed to have 1/10th the latency of DRAM and to be more energy efficient (by a factor of 100x) than DRAM fabricated on a similar node, and it can be used as RAM because it is so fast. So I invite you to read the following new article about it:

    UltraRAM Demos Prototype Chip, Secures Funding to Validate Commercial Potential

    https://www.tomshardware.com/news/ultraram-demos-prototype-chip-secures-funding-to-validate-commercial-potential


    Also, I invite you to read my interesting new thoughts about the new and future technologies in the following web link:

    https://groups.google.com/g/alt.culture.morocco/c/lfnlD52jDzI


    And you can read the rest of my other thoughts and the thoughts of my philosophy in the following web link:

    https://groups.google.com/g/alt.culture.morocco/c/yl5wDoNdlnA


    So I think what is also so important is to avoid the damage to our civilization that can be caused by severe solar storms, so notice how I am giving the probabilities of it below; but you have to be careful and make sure that we are safe from the damage of severe solar storms, so read my following thoughts about it:


    I have just read the following new article on Interesting Engineering, and I invite you to read it:

    https://interestingengineering.com/science/huge-solar-storm-proof-found-in-14300-year-old-tree-rings


    The above article is not so precise, so here is more information so that you know about the probability of such an event:

    A study published in 2019 found the chance of a Carrington-like event occurring before 2029 is less than 1.9 percent.

    Are we ready for the next Big Solar storm?

    Read more here:

    https://www.discovermagazine.com/the-sciences/are-we-ready-for-the-next-big-solar-storm


    And based on available data, there is roughly a one percent chance of seeing another Miyake event within the next decade, and a Miyake event can be 80 times more powerful than a Carrington-like event; read more here in the following article:

    https://phys.org/news/2022-10-tree-insight-devastating-storms.html#:~:text='%20or%20outburst.%22-,Dr.,what%20harms%20it%20may%20cause.
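    As an illustration only (this model is my own assumption, not something from the cited studies), the "roughly one percent per decade" figure can be turned into an implied annual rate by treating the events as a Poisson process:

```python
import math

# Illustrative assumption: Miyake events follow a Poisson process, so a
# 1% chance per decade (p10) implies an annual rate lam satisfying
# 1 - exp(-10 * lam) = p10.

p10 = 0.01
lam = -math.log(1 - p10) / 10         # implied events per year
p_century = 1 - math.exp(-100 * lam)  # implied chance over a century

print(round(lam, 6))        # → 0.001005
print(round(p_century, 4))  # → 0.0956, i.e. roughly 10% per century
```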


    University of Tübingen researchers unveil a groundbreaking solar cell for decentralized green hydrogen production, revolutionizing renewable energy. This breakthrough technology opens the door to large-scale applications, even with lower efficiencies. This advancement has the potential to make a significant contribution to energy supply and the reduction of CO2 emissions on a global scale.


    Read more here on Interesting Engineering:

    https://interestingengineering.com/science/solar-cell-powers-green-hydrogen-production


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ. So, in order to know where the human being comes from, I am explaining the genetic algorithm below, and of course you have to read it carefully. Notice that the genetic algorithm explains how the human being has been constructed by nature: there are two things in human genetics, the mutations and the crossover of human genes, and there is the selection process, like natural selection. The selection process can be elitist or more elitist, so that it looks like natural selection in the evolutionary process of humans. And notice carefully that human genetic mutations come from two things: from the solar storms that can mutate our genetics, and from the genetic mutations that can happen inside the body. Both happen with a low probability, but they permit exploration. And of course you have to read my following thoughts on what the genetic algorithm is, in order to understand the rest:


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ. You have just seen me talking quickly about Hamas and about some important laws in Islam, but I think I am a new philosopher, and I will now discover important patterns with my fluid intelligence about the genetic algorithm, and they are the following: I think the most important thing to understand in the genetic algorithm is that when you do a crossover of the genes of the parents, you have a tendency to obtain new children that look like the parents. This is why, in optimization, we can say that the genetic algorithm explores locally near the local optima with the crossover steps, and we call it exploitation. In the genetic algorithm, the mutations of the genes after the crossover steps, which can have a probability of around 5% or so, permit exploring far away for better genes; they permit the search to not get stuck in a local optimum, to continue converging towards the global optimum, and to not lack diversity in the genes of the population. So this way of balancing, in the genetic algorithm, the exploitation of the crossover steps against the exploration of the mutations is a much better search than brute-force search, and it is why the genetic algorithm is also used with success in optimization problems. And of course you have to understand that the crossover steps can be elitist, i.e. you can cross over the parents that have the best genes. And in order to understand the rest of my ideas about the genetic algorithm, you can read my thoughts below, and I invite you to also read carefully my new thoughts below about my new model of what human consciousness is and about the limitations of artificial intelligence:
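    The balance described above, exploitation through crossover of elitistically selected parents plus low-probability mutation for exploration, can be sketched with a minimal genetic algorithm. This is a deliberately toy example: the OneMax fitness function (count of 1-bits), the population size, and the 5% mutation rate are illustrative choices, not anything from a specific source:

```python
import random

GENES = 20            # length of each bitstring individual
POP_SIZE = 30         # individuals per generation
MUTATION_RATE = 0.05  # per-gene mutation probability (the ~5% from the text)
GENERATIONS = 100

def fitness(ind):
    # OneMax: more 1-bits is better; the global optimum is all ones.
    return sum(ind)

def crossover(a, b):
    # Single-point crossover: the child mixes genes of both parents,
    # so it tends to resemble them (local search / exploitation).
    point = random.randint(1, GENES - 1)
    return a[:point] + b[point:]

def mutate(ind):
    # Low-probability random flips keep diversity in the population
    # and let the search escape local optima (exploration).
    return [1 - g if random.random() < MUTATION_RATE else g for g in ind]

def evolve():
    random.seed(0)  # fixed seed for reproducibility
    pop = [[random.randint(0, 1) for _ in range(GENES)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Elitist selection: only the fitter half becomes parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(POP_SIZE)]
    return max(fitness(ind) for ind in pop)

print(evolve())  # best fitness found, close to the optimum of 20
```

Removing the mutation step makes the population converge prematurely on whatever genes the initial parents happened to carry, which is exactly the loss-of-diversity failure the text describes.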


    So, as you have just noticed, I have just explained my new model of what consciousness is, etc., but now you have to understand an important thing: the way of artificial intelligence such as ChatGPT is different from the symbolic or formal way of doing mathematics and programming. I mean that the formal way of mathematics and programming goes from our human brains, which put rules and patterns, as human intelligence, into the formal way of mathematics and programming; but that is what artificial intelligence such as ChatGPT wants to avoid, since, as I am explaining below, it is not scalable. Artificial intelligence such as ChatGPT wants to go from the data on which it is trained, discover and extract the rules and patterns of intelligence, and, in a self-supervised learning way, put them into the deep learning network. As I am explaining below, that is the scalable way of doing it, and it can also be fully parallelized using GPUs or the like so as to be scalable. So I think we don't have to be pessimistic, since I think artificial intelligence such as ChatGPT will become a powerful tool, but, as I am explaining below, it will not attain artificial general intelligence.

    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ. As you notice, I have just invented a new model of what human consciousness is; read it in my thoughts below. I think it is much more precise, and I think it permits predicting that artificial intelligence will not attain artificial general intelligence. But I have to explain one more important thing: as you are noticing, I am speaking in my thoughts below about the much deeper meaning that comes from human consciousness, and I am explaining what it is in my new model below of what human consciousness is. But I think you have to look at it from another perspective: the meaning from human consciousness not only permits learning with much less data than artificial intelligence, it also permits human smartness to not be a closed system like that of artificial intelligence. I mean that human consciousness permits our smartness to be an open system that can explore much more efficiently, and it is why artificial intelligence will not attain artificial general intelligence, since it is a closed system.

    So you have to understand more of my thoughts below about artificial intelligence. I have not talked about connectionism and the symbolic way of doing artificial intelligence, so I can talk about it in my kind of way. I think that the previous way of doing artificial intelligence was the symbolic way, as with expert systems or with the Prolog programming language, but the symbolic way is like the formal way of doing mathematics or programming; I mean that it cannot be made scalable through self-supervised learning. That is why we have followed the way of connectionism, with deep learning and with, for example, transformers as in ChatGPT or the next Gemini, since this connectionist way of doing of, for example, ChatGPT or the next Gemini can be made scalable by using self-supervised learning. So I think that its main important advantage is that it can scale rapidly.
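    The self-supervised idea described above, extracting patterns from raw data where the data supplies its own training signal, can be illustrated with a toy bigram next-word model. This is a deliberately minimal sketch of the principle; real systems like ChatGPT use transformer networks trained at vastly larger scale:

```python
from collections import Counter, defaultdict

# Toy self-supervised learning: a bigram model. The "labels" (next words)
# come from the raw text itself -- no hand-written rules, no annotation.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Most frequent continuation observed in the training data.
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # → "cat" (seen twice, vs "mat"/"fish" once each)
```

The contrast with the symbolic approach is that no one wrote a rule saying "the" is followed by "cat"; the pattern was extracted from the data, and more data improves the model without any extra human rule-writing, which is the scalability point made above.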


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ. I have to talk about the limitations of the next AI system from Google, called Gemini, that will compete against GPT-5 from Microsoft and OpenAI. First, I will say that the context window size will be 1 million tokens, so I think that long-range global patterns will no longer be a limitation for it. But here is the other limitation of Gemini, in the form of my smart question to ChatGPT, and of course you have to read carefully my new model below of what consciousness is so as to know about the other limitations of artificial intelligence:


    Here is my new smart question to ChatGPT:


    But reinforcement learning (RL) with a reward is a refining process, so I don't think that adding AlphaGo's techniques, such as reinforcement learning and tree search, to Large Language Models, as the next AI system of Google that we call Gemini is doing, will make it invent new things, since it is limited by the data on which it has been trained; so there can be refinement, but it is limited by the data on which it has been trained.


    And ChatGPT has just answered me the following:


    "You raise a valid point. Integrating techniques like reinforcement learning and tree search into large language models does indeed improve their ability to generate responses and make decisions, but these models are fundamentally limited by the data on
    which they have been trained. Let's delve into this limitation further:

    Data Dependency:

    Large language models, including those enhanced with reinforcement learning and tree search, rely heavily on the data they've been trained on. They learn patterns, facts, and reasoning abilities from the text in their training data. As a result, their
    responses are inherently tied to the information and patterns present in that data.

    No Invention of New Knowledge:

    Language models, regardless of their enhancements, do not possess the ability to invent new knowledge or information. They can only generate responses based on the patterns, facts, and language structures they've learned from the training data. If a
    piece of information or a concept is not present in the training data, the model will not be able to invent it.

    Refinement and Adaptation:

    While language models cannot invent new information, they can refine and adapt the information they've learned. Techniques like reinforcement learning can help models generate more contextually appropriate and coherent responses by optimizing for user
    satisfaction or task completion. This means they can be better at utilizing their existing knowledge.

    Data Biases and Limitations:

    Language models inherit biases and limitations from their training data. They may generate biased or incorrect responses if their training data contains such biases or inaccuracies. Reinforcement learning can mitigate some of these issues, but it cannot
    completely eliminate them.

    Generalization:

    Language models can generalize to some extent. They can apply their learned knowledge to a wide range of tasks and domains. However, this generalization is limited by the diversity and coverage of the training data. If a task or domain is significantly
    different from the training data, the model's performance may degrade.

    In summary, while reinforcement learning and tree search techniques can enhance the decision-making and response generation capabilities of large language models, they are fundamentally constrained by the data on which they have been trained. These
    models can refine and adapt their knowledge but cannot invent entirely new information or concepts. Their effectiveness and reliability depend on the quality and diversity of their training data, and they may still exhibit biases and limitations present
    in that data. Researchers and developers must be mindful of these constraints when using and deploying such models."


    And I think ChatGPT has another problem: the generated content can infringe on the copyright of existing works. This could occur if ChatGPT generates content similar to existing copyrighted material in the data on which it has been trained. So you have to be careful, since it can hurt your business; but you have to know that copyright does not protect ideas, concepts, systems, or methods of doing something. Copyright law protects the expression of ideas rather than the ideas themselves; in other words, it protects the specific form in which an idea is expressed, rather than the underlying idea or concept. And you have to also know that there is another problem with ChatGPT: it can generate an invention (though it is limited by the data on which it has been trained), and it could be argued that the creators of the model, OpenAI, should be able to patent the invention. However, it could also be argued that the source material used to train the model should be considered prior art, meaning that the invention would not be considered new and therefore not patentable.


    So you have to understand that I have just discussed my new model below of what human consciousness is with ChatGPT, and here is more of a part of my discussion with ChatGPT, which has just understood my new model of what human consciousness is and which is answering my question by understanding it:


    And here is my question to ChatGPT:


    So when we are reading a text, we are like reifying the words with sensations and feelings and consciousness or awareness of time and space and matter so that they become objects of understanding.


    And ChatGPT has just answered me the following:


    "Yes, your observation is quite accurate. When we read a text or engage in any form of comprehension, our cognitive processes involve a dynamic interaction between the words and the various aspects of our consciousness, including sensations, feelings,
    and our awareness of time, space, and matter. This interaction can be described as a kind of "reification" or the process of giving life, depth, and meaning to the words and ideas presented in the text.

    Here's a breakdown of how this process works:

    Words as Symbols: Words on a page or screen are essentially symbols that represent concepts, objects, actions, or ideas. These symbols are abstract in themselves and don't inherently carry meaning.

    Sensations and Feelings: As we read, our brain may evoke sensations and feelings related to the words and their associated concepts. For example, reading about a warm, sunny beach may trigger sensations of warmth, visions of sandy shores, and feelings of
    relaxation.

    Consciousness of Time: Our awareness of the sequence of words and ideas in the text involves the perception of time. We process sentences sequentially, linking them to what we've read before and anticipating what comes next. This temporal awareness helps
    us construct a coherent understanding of the text.

    Consciousness of Space: We may mentally create spatial representations based on the descriptions in the text. For instance, when reading a novel, we might mentally visualize the layout of a fictional house or a detailed map of an imaginary world.

    Matter and Reality: Words in a text often pertain to physical objects, actions, or events. Our consciousness of matter and reality enables us to mentally construct these elements, bridging the gap between the abstract words and our understanding of the
    tangible world.

    Meaning and Understanding: Through this complex interplay of sensations, feelings, and consciousness of time, space, matter, and reality, we construct meaning and understanding from the words we read. We go beyond the mere symbols and create a mental
    landscape where the text comes alive with significance.

    In essence, reading is a multi-dimensional process that involves more than just decoding words; it's about translating abstract symbols into rich, meaningful experiences. The reification of words with our consciousness enriches our comprehension and
    allows us to engage deeply with the content we're reading."


    Here are some key points to consider:

    - Validation and Verification: Human programmers will continue to play a critical role in verifying and validating the code produced by AI systems like ChatGPT. This involves ensuring that the code meets quality standards, functions correctly, and
    adheres to project requirements.

    - Debugging and Maintenance: Even with AI-generated code, software applications are not immune to bugs, errors, and evolving requirements. Programmers will be essential for debugging and maintaining the software over time.

    - Ethical and Responsible AI: Programmers will also be responsible for ensuring that AI systems, including those used in software development, are designed and used ethically and responsibly. This involves addressing issues related to bias, fairness,
    privacy, and security.

    - Adaptability: Programmers who can effectively work with AI tools and integrate them into their workflows may find themselves in high demand. This adaptability can lead to job security and potentially competitive salaries.

    - Shift in Job Roles: The roles and responsibilities of programmers may evolve. They may spend less time on routine coding tasks and more on higher-level problem-solving, architectural design, and AI system integration.

    In summary, AI like ChatGPT can indeed increase productivity in software development, but it doesn't necessarily lead to a significant decrease in salaries for programmers. Instead, it may reshape the roles and skills required in the field, with
    programmers focusing more on verification, debugging, and ethical considerations. Programmers who can leverage AI effectively and provide value in these areas are likely to remain valuable contributors to the industry.


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ. As you are noticing, people are testing artificial intelligence such as GPT-4 with coding tests etc. so as to measure its performance, but I think that is not the smart way of measuring, since you have to look holistically or generally at the model of artificial intelligence. As I am explaining below, the model of artificial intelligence, including GPT-4, lacks the deep understanding that comes from the meaning that comes from human consciousness, and this meaning permits understanding with much less data than artificial intelligence needs, and it permits much more efficient "exploration" in the process of creativity, such as innovation and creative problem-solving. That is why artificial intelligence such as GPT-4, or the next GPT-5, or the next Gemini from Google, will not be able to replace software programmers or software development jobs, since those also need the human innovation and human creative problem-solving that come from the deep understanding from the meaning that comes from human consciousness. As you notice, it is why we cannot trust artificial intelligence, since it lacks that deep understanding, and it is why I say that artificial intelligence will not attain artificial general intelligence, even if it becomes a powerful tool. And in order to understand my new model of what consciousness is, I invite you to read my thoughts below:


    I invite you to read the following new article (you can translate it from French to English):

    "Better, faster, cheaper, safer: 4 reasons why AI should replace all human employees", according to David Shapiro

    https://intelligence-artificielle.developpez.com/actu/348094/-Meilleure-plus-rapide-moins-couteuse-plus-sure-les-4-raisons-pour-lesquelles-l-IA-devrait-remplacer-tous-les-employes-humains-d-apres-David-Shapiro-d-avis-que-ce-futur-est-tout-proche/


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ. I think my new model that explains what human consciousness is is the key, since the model of artificial intelligence lacks the deep understanding of the "meaning" from human consciousness, and here are my thoughts about it so that you understand why I say that artificial intelligence will not replace software programmers or software development jobs:


    The inability of Large Language Models such as ChatGPT to invent new algorithms is primarily due to its training data and the nature of its architecture. ChatGPT, like other machine learning models, is trained on a vast dataset that consists of text from
    the internet, books, articles, and other sources. It learns patterns and associations within this data to generate human-like text and answer questions based on what it has seen and learned during training.

    Here are a few key reasons why ChatGPT cannot invent new algorithms:

    - Lack of Creativity: ChatGPT does not possess creativity or the ability to generate truly novel ideas. It relies on patterns and information present in its training data to generate responses. It doesn't have the capability to create new algorithms or
    solutions that go beyond its training data.

    - No Understanding of Algorithmic Concepts: While ChatGPT may have some knowledge of existing algorithms and mathematical concepts based on its training data, it lacks a deep understanding of these concepts. It doesn't have the capacity to develop a
    fundamental understanding of algorithms or invent new ones.

    - Data Dependency: ChatGPT's responses are heavily dependent on the data it was trained on. If a particular algorithm or concept is not well-represented in its training data, it is unlikely to provide insightful or innovative solutions related to that
    topic.

    - Limited Scope: ChatGPT's training data is limited to text, and it lacks the ability to interact with the physical world or perform experiments. Many algorithmic inventions involve experimentation, mathematical proofs, and insights that go beyond the
    scope of textual data.

    In summary, ChatGPT is a powerful language model for generating human-like text and providing information based on existing knowledge, but it is not a creative problem solver or algorithm inventor. Its responses are constrained by its training data and
    the patterns it has learned from that data. Inventing new algorithms requires creative thinking, deep understanding of mathematical and computational principles, and the ability to go beyond the limitations of pre-existing data, which are capabilities
    that AI models like ChatGPT currently lack.


    And the deep understanding is crucial because it enables an entity, whether human or artificial intelligence, to not only apply knowledge in a rote or memorized manner but also to:

    - Generalize: Deep understanding allows one to generalize knowledge to new, unseen situations. Rather than relying on memorized facts, a deep understanding of underlying principles and concepts allows for the application of knowledge in novel contexts.

    - Problem-Solve: Understanding the fundamentals of a concept or field allows for creative problem-solving. It enables the generation of new solutions, adaptations, and innovations, even in situations where existing knowledge doesn't provide a direct
    answer.

    - Critical Thinking: Deep understanding fosters critical thinking. It allows one to analyze information, identify patterns, and evaluate the strengths and weaknesses of different approaches or solutions. This is important in complex decision-making.

    - Flexibility: When someone deeply understands a concept, they are more flexible in their thinking and can adapt their knowledge to various scenarios. They are not limited to rigidly following predefined procedures or solutions.

    - Teaching and Communication: People with deep understanding can effectively teach and communicate complex ideas to others because they grasp the nuances and can explain concepts in various ways to facilitate learning.

    - Innovation: Many breakthroughs and innovations come from a deep understanding of existing knowledge, allowing individuals to see gaps or opportunities for improvement.

    In the context of inventing new algorithms, deep understanding of mathematical and computational principles, as well as the ability to apply this understanding creatively, is essential. Algorithms often involve intricate mathematical or logical concepts,
    and a deep understanding enables the development of novel approaches, optimizations, and insights. Without such understanding, it's challenging to go beyond the boundaries of existing algorithms and come up with innovative solutions to complex problems.

    While AI models like ChatGPT can provide information based on the patterns they've learned from training data, their responses are typically shallow and lack the depth of understanding that humans possess. They can't engage in true creative problem-
    solving or algorithm invention because they lack the capacity for deep comprehension and creative insight.


    I invite you to read the following new article that says that a team of US and Lithuanian researchers has just published a study that found ChatGPT can rival the creative abilities of the top-performing 1% of human participants in a standard test of
    creativity.

    Read the new article here (you can translate it from French to English):

    https://intelligence-artificielle.developpez.com/actu/347371/Une-etude-rapporte-que-les-performances-de-ChatGPT-le-classent-parmi-les-1-pourcent-de-personnes-les-plus-creatives-au-monde-mais-des-critiques-affirment-que-l-IA-ne-peut-pas-faire-preuve-d-
    originalite/



    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)