• More of my philosophy about HBM3 and about technology and about artificial intelligence and more of my thoughts..

    From Amine Moulay Ramdane@21:1/5 to All on Thu Apr 20 14:56:33 2023
    Hello,




    More of my philosophy about HBM3 and about technology and about artificial intelligence and more of my thoughts..

    I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..



    I invite you to read the following new article:

    SK Hynix Samples 24GB HBM3 Modules: Up to 819 GB/s

    https://www.tomshardware.com/news/sk-hynix-samples-24-gb-hbm3-modules



    So, SK Hynix Inc. is a South Korean supplier of dynamic random-access memory chips and flash memory chips. Hynix is the world's second-largest memory chipmaker and the world's third-largest semiconductor company.


    So, HBM3 offers several enhancements over HBM2E, most notably a near-doubling of the per-pin data rate, from 3.6 Gb/s for HBM2E up to 6.4 Gb/s for HBM3, which works out to 819 GB/s of bandwidth per device.
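
    As a back-of-the-envelope check of the 819 GB/s figure, here is a minimal sketch in Python, assuming the standard 1024-bit HBM stack interface:

        # Peak bandwidth of one HBM3 stack from its per-pin data rate.
        pins = 1024                  # bits: standard HBM interface width
        data_rate_gbps = 6.4         # per-pin data rate of this HBM3 part
        bandwidth_gb_s = pins * data_rate_gbps / 8   # bits -> bytes
        print(bandwidth_gb_s)        # 819.2 GB/s per stack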


    So, I think that increasing memory capacity is getting more difficult, since memory is coming under Moore's Law pressures: making memory that is both increasingly dense and increasingly fast is getting harder and harder, and hence the price of memory has not come down as much. Other than that, I think that the High Performance Linpack (HPL) matrix math test is not particularly memory bound, but the High Performance Conjugate Gradients (HPCG) and STREAM Triad benchmarks are memory bound; the HPCG benchmark generates a synthetic discretized three-dimensional partial differential equation model problem and computes preconditioned conjugate gradient iterations for the resulting sparse linear system. We don't yet know what the HBM2E memory on the Intel Sapphire Rapids CPU will cost, and even if the bandwidth of HBM2E on Sapphire Rapids is 4X higher, I think that it is not enough for supercomputers, since the Sapphire Rapids CPU costs 3X more. That is why HBM3, which nearly doubles the bandwidth of HBM2E, is in my view an interesting option, and why I think the successor to Sapphire Rapids has to support HBM3 so that it, too, becomes an interesting option for supercomputers.
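
    To make concrete why STREAM Triad is memory bound, here is a minimal sketch in Python/NumPy (a toy illustration on my part: the real benchmark is written in C and Fortran, and NumPy's temporary arrays make the true memory traffic somewhat higher than the classic 24 bytes per element):

        import numpy as np
        import time

        n = 20_000_000            # ~160 MB per float64 array, far larger than CPU caches
        b = np.random.rand(n)
        c = np.random.rand(n)
        a = np.zeros(n)
        scalar = 3.0

        start = time.perf_counter()
        a[:] = b + scalar * c     # Triad: only 2 flops per 24 bytes moved
        elapsed = time.perf_counter() - start

        bytes_moved = 3 * n * 8   # read b, read c, write a (8 bytes per float64)
        print(f"Effective bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")

    With so little arithmetic per byte, the run time is set by memory bandwidth rather than by the CPU's floating-point units, which is exactly why HBM helps this class of workload.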


    More of my philosophy about the future Apps and about artificial intelligence and more of my thoughts..



    I have just looked at the following new video from TechLead, whom I know of, and I invite you to watch it:

    Why ChatGPT AI Will Destroy Programmers.

    https://www.youtube.com/watch?v=U1flF5WOeNc



    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I think that TechLead in the above video is not thinking correctly when he says that software programming is dying. I say that software programming is not dying, since the future apps include, for example, the Metaverse, and of course we need Zettascale, or a ZettaFLOP, so that the Metaverse becomes possible. As you can notice, the article below, with Intel's Raja Koduri talking, says that such an architecture is possible and will be ready around 2027 to 2030, as follows:

    An architecture jump of 16x, power and thermals of 2x, data movement of 3x, and process of 5x: that is about 500x which, on top of the two-ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what
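
    As a quick sanity check of those multipliers (the factors are quoted from the interview above; the arithmetic sketch is mine):

        # Multiplying out Raja Koduri's quoted improvement factors.
        architecture = 16
        power_thermals = 2
        data_movement = 3
        process = 5
        total = architecture * power_thermals * data_movement * process
        print(total)                      # 480, i.e. "about 500x"
        aurora_exaflops = 2
        print(total * aurora_exaflops)    # 960 ExaFLOPs, roughly a ZettaFLOP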

    And the other future apps include, for example, the ones that use data to be smart in real time (read about it here: https://www.cxotoday.com/cxo-bytes/data-driven-smart-apps-are-the-future/ ). I also say that software programming is not dying since GPT-4 and similar artificial intelligence will replace just a small percentage of software programmers, because software programming also needs to care about accuracy and reliability. So you have to look at the following most important limitations of GPT-4 and similar artificial intelligence in order to notice it:


    1- GPT-4 lacks understanding of context: GPT-4 was trained on large amounts of text data, but it does not have the ability to understand the context of the text. This means that it can generate coherent sentences, but they may not always make sense in the context of the conversation.


    2- And GPT-4 is limited in its ability to generate creative or original content: since GPT-4 is trained on existing text data, it is not able to generate genuinely new ideas or concepts. This means that GPT-4 is not suitable for tasks that require creativity or originality.


    And I invite you to read the following article to understand more about GPT-4:

    Exploring the Limitations and Potential of OpenAI’s GPT-4

    https://ts2.space/en/exploring-the-limitations-and-potential-of-openais-gpt-4/



    And more of my philosophy about the objective function and about artificial intelligence and more of my thoughts..

    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, and I think I am understanding GPT-4 more with my fluid intelligence. So I think that GPT-4 uses deep learning, and it uses the mechanism of self-attention to understand context, and it uses Reinforcement Learning from Human Feedback (RLHF), which uses a reward mechanism to learn from the feedback of the people who are using GPT-4, for example to judge whether this or that output is good or not; a minimal sketch of self-attention follows right after this paragraph.
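
    Here is a minimal single-head self-attention sketch in Python/NumPy, just to illustrate the mechanism named above; it is a toy version under my own assumptions, since GPT-4's actual architecture is not public:

        import numpy as np

        def self_attention(X, Wq, Wk, Wv):
            # X: (seq_len, d_model); returns contextualized token representations.
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            scores = Q @ K.T / np.sqrt(Q.shape[-1])   # how much each token attends to the others
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
            return weights @ V                        # weighted mix of value vectors

        rng = np.random.default_rng(0)
        seq_len, d_model = 4, 8
        X = rng.normal(size=(seq_len, d_model))
        Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
        print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)

    The key point is that each token's new representation is a weighted mix of every token's representation, which is how the surrounding context gets taken into account.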
    But I think that the problem of GPT-4 is that it needs a lot of data, and that is its first weakness; and it is dependent on the data and on the quality of the data, and that is its second weakness. In the unsupervised learning that is used to train GPT-4 on massive data, the quality of the data is not known with certitude, so it is a weakness of artificial intelligence such as GPT-4. And about the objective function that guides it: I think that it is the patterns that are found and learned by the neural network of GPT-4 that play the role of the guiding objective function, so the objective function comes from the massive data on which GPT-4 has been trained, and I think that is also a weakness of GPT-4, since I think that what is missing is what my new model of what consciousness is explains: the meaning from human consciousness also plays the role of the objective function, which makes it much better than artificial intelligence and means that it needs much less data. That is why the human brain needs much less data than artificial intelligence such as GPT-4. So I invite you to read my following previous thoughts so that you understand my views:



    More of my philosophy about artificial intelligence such as GPT-4 and about my philosophy and more of my thoughts..


    I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just looked more carefully at GPT-4, and I think, as I have just explained, that it will become powerful, but that it is limited by the data, and the quality of the data, on which it has been trained. So if it encounters a new situation to be solved, and the solution cannot be inferred from the data on which it has been trained, it will not be capable of solving this new situation. I think that my new model of what consciousness is explains that what is lacking is the meaning from human consciousness that permits one to solve the problem, so my new model explains that artificial intelligence such as GPT-4 will not attain artificial general intelligence, or AGI; but even so, I think that artificial intelligence such as GPT-4 will become powerful.

    So I think that the problematic in artificial intelligence is about the low-level layers. Look at the assembler programming language: it is a lower-level layer than the high-level programming languages, but notice that this low-level layer can do things that the higher-level layers cannot do; for example, you can play with the stack registers, low-level hardware registers, and low-level hardware instructions, and a low-level layer like assembler can teach you more about the hardware, since it is really near the hardware. I think that this is what is happening in artificial intelligence such as the new GPT-4: GPT-4 is trained on data so as to discover patterns that make it smarter, but the problematic is that this layer, of how it is trained on data to discover patterns, is a high-level layer, like a high-level programming language. So I think it is missing the low-level layers of what makes the meaning, like the meaning of the past and the present and the future, or the meaning of space and matter and time, from which you can construct the bigger meaning of other, bigger things. That is why I think that artificial intelligence will not attain artificial general intelligence, or AGI, and that what is lacking in artificial intelligence is what my new model of what consciousness is explains. You can read all my following thoughts at the following web link so that you understand my views about it and about different other subjects:


    https://groups.google.com/g/alt.culture.morocco/c/QSUWwiwN5yo



    And I think that there is a law called Eroom's law that says that the cost of developing a new medical drug roughly doubles every nine years. I think that Eroom's law comes from the basic idea that it takes 80% of the effort to solve the last 20% of the problems. But of course we should not be pessimistic about it, and I invite you to read the following very interesting article by a scientist who talks about it and gives reasons to be optimistic:


    The exponential cost of progress

    https://lemire.me/blog/2015/08/10/the-exponential-cost-of-progress/
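
    As a minimal sketch of what this nine-year doubling implies numerically (my own illustration, not taken from the article):

        # Cost multiplier after a given number of years under Eroom's law.
        def eroom_cost_multiplier(years, doubling_period=9.0):
            return 2 ** (years / doubling_period)

        for t in (9, 18, 27, 45):
            print(f"after {t} years: cost x{eroom_cost_multiplier(t):.0f}")
        # After 45 years, the same R&D budget buys roughly 32x less, per this law.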


    More of my philosophy about HP and about the Tandem team and more of my thoughts..


    I invite you to read the following interesting article so that you notice how HP was smart in also acquiring Tandem Computers, Inc., with their "NonStop" systems, and in learning from the Tandem team that also extended HP NonStop to the x86 server platform. You can read about that in my writing below, and you can read about Tandem Computers here: https://en.wikipedia.org/wiki/Tandem_Computers . Notice that Tandem Computers, Inc. was the dominant manufacturer of fault-tolerant computer systems for ATM networks, banks, stock exchanges, telephone switching centers, and other similar commercial transaction processing applications requiring maximum uptime and zero data loss:


    Read more here:

    https://www.zdnet.com/article/tandem-returns-to-its-hp-roots/

    More of my philosophy about HP "NonStop" to x86 Server Platform fault-tolerant computer systems and more..

    Now HP to Extend HP NonStop to x86 Server Platform

    HP announced in 2013 plans to extend its mission-critical HP NonStop technology to x86 server architecture, providing the 24/7 availability required in an always-on, globally connected world, and increasing customer choice.

    Read the following to notice it:

    https://www.ciol.com/hp-extend-hp-nonstop-x86-server-platform/


    And today HP provides HP NonStop on the x86 server platform; here is
    an example, read here:

    https://www.hpe.com/ca/en/pdfViewer.html?docId=4aa5-7443&parentPage=/ca/en/products/servers/mission-critical-servers/integrity-nonstop-systems&resourceTitle=HPE+NonStop+X+NS7+%E2%80%93+Redefining+continuous+availability+and+scalability+for+x86+data+sheet



    More of my philosophy about Crucial T700 SSD Preview...


    Crucial, from the USA, is the only brand whose parent company, Micron (read more about it here: https://www.micron.com/), innovated the NAND inside the T700 Gen5 SSD. Micron's 45-year reputation for industry innovation and leadership backs up the end-to-end quality, reliability, superior testing and OEM qualification in every Crucial SSD.


    And Crucial T700 SSD Preview: Fastest Consumer SSD Hits 12.4 GB/s


    Read more here:

    https://www.tomshardware.com/features/crucial-t700-ssd-preview-fastest-consumer-ssd-hits-124-gbs
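
    As a rough illustration of what 12.4 GB/s of sequential read means in practice (the 2 TB capacity below is a hypothetical example for the arithmetic, not a claim about the product line):

        # Time to read an entire drive at the headline sequential speed.
        drive_capacity_gb = 2048      # hypothetical 2 TB model
        seq_read_gb_s = 12.4          # T700 headline sequential read
        print(f"{drive_capacity_gb / seq_read_gb_s:.0f} s")   # ~165 s for the whole drive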


    And read more here:

    https://www.crucial.com/products/ssd/crucial-t700-ssd





    Thank you,
    Amine Moulay Ramdane.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Man of Your dreams@21:1/5 to All on Fri Apr 28 10:57:28 2023
    Good evening, horse!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)