Hello,
More of my philosophy about Wait-free Eras and Hazard Eras and more..

I think that Wait-free Eras also loops a bounded number of times, bounded by the number of threads, which is what makes it wait-free, as in the get_protected() function in the Wait-free Eras source code, so I think that it too consumes too much energy.
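Since the papers' source code is not reproduced here, this is a minimal, hedged sketch of what an era-based get_protected() loop looks like; the names (global_era, reserved_era) and the simplified single-slot structure are my assumptions, not the papers' exact code. In Hazard Eras this loop retries until the global era stops moving (unbounded, lock-free); Wait-free Eras bounds the retries by the number of threads before falling back to a helping mechanism:

```cpp
#include <atomic>
#include <cstdint>

// Hedged sketch of an era-based protect loop (assumed names, not the papers' code).
std::atomic<uint64_t> global_era{1};     // advanced by threads that retire nodes
thread_local uint64_t reserved_era = 0;  // era this thread has published

template <typename T>
T* get_protected(std::atomic<T*>& src) {
    uint64_t prev = reserved_era;
    for (;;) {  // Hazard Eras: unbounded retry; Wait-free Eras bounds this by #threads
        T* ptr = src.load(std::memory_order_acquire);
        uint64_t era = global_era.load(std::memory_order_acquire);
        if (era == prev) return ptr;  // era stable: ptr cannot have been reclaimed
        reserved_era = era;           // publish the newer era and retry
        prev = era;
    }
}
```

Each retry costs an extra pair of atomic loads, which is the energy cost being discussed: the loop keeps spinning whenever the global era keeps advancing under contention.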
I am a white Arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms and other algorithms..
As you have just noticed, I have just quickly read the PhD paper below,
and I have also just read two other PhD papers, about Wait-free Eras and Hazard Eras; here they are, read them carefully:
Here is the PhD paper about Wait-free Eras:
https://arxiv.org/pdf/2001.01999.pdf
And here is the PhD paper of Hazard Eras:
https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/hazarderas-2017.pdf
They are two new memory reclamation schemes, but having just read the PhD papers, I find two defects or disadvantages: first, you have to "fix" the number of threads so that the algorithms work, which is not good and not flexible; second, they are not energy efficient, since the lock-free Hazard Eras loops like other lock-free algorithms do, so it consumes a lot of energy.
And here is a video about another new memory reclamation scheme, VBR: Version Based Reclamation:
https://www.youtube.com/watch?v=uWXXNWNwr-w
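The first defect above (having to fix the number of threads) shows up in the data layout of such schemes: the per-thread reservation slots are typically sized up front, and every reclamation scan walks all of them. This is a hedged illustration, with MAX_THREADS, the slot array, and the scan function being my assumptions for this sketch:

```cpp
#include <atomic>
#include <cstdint>

// Assumed illustration: reservations sized at compile time, so the thread
// count must be fixed before the algorithm runs (the inflexibility noted above).
constexpr int MAX_THREADS = 128;
std::atomic<uint64_t> reservations[MAX_THREADS];  // one published era per thread

// A retiring thread must scan every slot, so the work is O(MAX_THREADS)
// even when only a few threads are actually running.
uint64_t minimum_reserved_era() {
    uint64_t min_era = UINT64_MAX;
    for (int i = 0; i < MAX_THREADS; ++i) {
        uint64_t e = reservations[i].load(std::memory_order_acquire);
        if (e != 0 && e < min_era) min_era = e;  // 0 means "no reservation"
    }
    return min_era;  // nodes retired before this era are safe to free
}
```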
And notice the following new algorithm from a PhD researcher, called OneFile, that is a wait-free persistent Transactional Memory, here:
https://github.com/pramalhe/OneFile/blob/master/OneFile-2019.pdf
So the disadvantages of OneFile are that it is not energy efficient, like the Wait-Free Eras algorithm above, and that the code is much more difficult and complex than using the much simpler and much easier locking algorithms.
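For contrast, the "much simpler locking algorithms" the comparison refers to can make a whole transaction one critical section. This is a minimal sketch of the lock-based alternative, not OneFile's actual API:

```cpp
#include <mutex>

// Minimal lock-based "transaction": trivially correct, but blocking,
// unlike OneFile's wait-free software transactional memory.
std::mutex tx_lock;
int balance_a = 100;
int balance_b = 0;

void transfer(int amount) {
    std::lock_guard<std::mutex> guard(tx_lock);  // whole update is one critical section
    balance_a -= amount;
    balance_b += amount;
}
```

The trade-off is exactly the one discussed above: the locking version is short and easy to verify, but a thread holding the lock can block every other thread, which the wait-free designs avoid at the cost of complexity.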
More of my philosophy about transformers limitation and Natural Language Processing (NLP) in artificial intelligence..
I invite you to read the following about Microsoft Megatron-Turing Natural Language Generation (MT-NLP) from NVIDIA:
https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/
I think I am quickly understanding the defect of Megatron-Turing Natural Language Generation (MT-NLP), which is better than GPT-3, and it is that the "self-attention" transformers in NLP, even if they scale to very long sequences, have a limited expressiveness: since they cannot process input sequentially, they cannot model hierarchical structures and recursion, and hierarchical structure is widely thought to be essential to modeling natural language, in particular its syntax.
Read the following paper to understand the mathematical proof of it:
https://aclanthology.org/2020.tacl-1.11.pdf
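Roughly, and as a hedged paraphrase of that paper's Lipschitz-continuity argument rather than its exact statement: for a fixed-depth soft-attention transformer, changing a single one of the $n$ input symbols perturbs the output only slightly,

$$ \lvert f_\theta(x) - f_\theta(x') \rvert \;=\; O\!\left(\frac{1}{n}\right), $$

where $x$ and $x'$ differ in one position. So for large $n$ the model cannot reliably decide languages like PARITY, where flipping a single symbol must flip the answer, which formalizes the limited expressiveness described above.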
Read my previous thoughts:
More of my philosophy about Natural Language Processing (NLP) in artificial intelligence and more..

I think that the transformers in Natural Language Processing (NLP) use a kind of deep learning, and Natural Language Processing (NLP) is a branch of Artificial Intelligence (AI) that enables machines to understand human language. I think that the transformers in NLP use pruning + quantization, which makes the model much faster and much smaller so that it scales much better, and I think these are among the basic ideas of Microsoft Megatron-Turing Natural Language Generation (MT-NLP) below, and I think that this is the way that can make common sense reasoning and also reading comprehension "emerge" in NLP.
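As a hedged illustration of those two techniques (my simplified sketch, not MT-NLP's actual pipeline): magnitude pruning zeroes the smallest weights, and quantization stores the survivors in 8 bits instead of 32:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Magnitude pruning: zero out weights whose absolute value is below a threshold.
void prune(std::vector<float>& w, float threshold) {
    for (float& x : w)
        if (std::fabs(x) < threshold) x = 0.0f;
}

// Uniform quantization: map floats in [-max_abs, max_abs] to int8.
std::vector<int8_t> quantize(const std::vector<float>& w, float max_abs) {
    std::vector<int8_t> q;
    q.reserve(w.size());
    for (float x : w)
        q.push_back(static_cast<int8_t>(std::lround(x / max_abs * 127.0f)));
    return q;  // 4x smaller than float32, and pruned zeros stay exactly zero
}
```

Together these shrink the model (quantization) and let sparse kernels skip work entirely (pruning), which is what makes the models "much faster and much smaller" as described above.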
The Software GPU: Making Inference Scale in the Real World by Nir Shavit, PhD
https://www.youtube.com/watch?v=mGj2CJHXXKQ
More of my philosophy about the benefits of Exascale supercomputers and more..
As you have just noticed, I have just posted about the following:
Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance
Read more here:
https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance
But Exascale supercomputers will also allow researchers to construct an accurate map of the brain, which will allow them to "reverse engineer" or understand the brain; read the following from Nicola Ferrier, Argonne senior computer scientist, to notice it:

“If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora, if everything worked beautifully, it could still take 1,000 days.”
Read more here to understand:
https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction
Also Exascale supercomputers will allow researchers to tackle problems which were impossible to simulate using the previous generation of machines, due to the massive amounts of data and calculations involved.
Small modular nuclear reactor (SMR) design, wind farm optimization and cancer drug discovery are just a few of the applications that are priorities of the U.S. Department of Energy (DOE) Exascale Computing Project. The outcomes of this project will have a broad impact and promise to fundamentally change society, both in the U.S. and abroad.
Read more here:
https://www.cbc.ca/news/opinion/opinion-exascale-computing-1.5382505
Also, the goal of delivering safe, abundant, cheap energy from fusion is just one of many challenges in which exascale computing’s power may prove decisive. That’s the hope and expectation. Also, to know more about the other benefits of using Exascale computing power, read more here:
https://www.hpcwire.com/2019/05/07/ten-great-reasons-among-many-more-to-build-the-1-5-exaflops-frontier/
And more of my philosophy about the future of humanity:
Read more here:
https://groups.google.com/g/alt.culture.morocco/c/0X024jfzNvM
More of my philosophy about artificial intelligence..
AI Generates Hypotheses Human Scientists Have Not Thought Of
Read more here:
https://www.scientificamerican.com/article/ai-generates-hypotheses-human-scientists-have-not-thought-of/
More of my philosophy about artificial intelligence and common sense reasoning..
"Microsoft and Nvidia today announced that they trained what they claim is the largest and most capable AI-powered language model to date: Megatron-Turing Natural Language Generation (MT-NLP). The successor to the companies’ Turing NLG 17B and Megatron-LM models, MT-NLP contains 530 billion parameters and achieves “unmatched” accuracy in a broad set of natural language tasks, Microsoft and Nvidia say — including reading comprehension, commonsense reasoning, and natural language inference."
Read more here:
https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/
So I think that one hypothesis is that we should be able to build even bigger models, with trillions of parameters or more, and artificial common sense will eventually emerge. Let’s call this the ‘brute-force’ hypothesis.
Read more here to notice it:
https://towardsdatascience.com/the-quest-for-artificial-common-sense-766af7fce292
Also I invite you to look carefully at the following video of a Jewish AI (artificial intelligence) scientist about artificial intelligence (and read about him here: https://rogantribe.com/who-is-lex-fridman/):
Exponential Progress of AI: Moore's Law, Bitter Lesson, and the Future of Computation
https://www.youtube.com/watch?v=Me96OWd44q0
I think that the Jewish AI (artificial intelligence) scientist who is speaking in the video above, and who is called Lex Fridman, is making a big mistake, since he focuses too much on improving deep learning in artificial intelligence using the exponential improvement of CPU hardware computation; I think that it is a "big" mistake, and you can easily notice it by reading carefully my following thoughts and writing:
More of my philosophy about artificial intelligence and specialized hardwares and more..

I think that specialized hardwares for deep learning in artificial intelligence, like GPUs and quantum computers, are no longer needed, since you can use only a much less powerful CPU with more memory and do it efficiently, since a PhD researcher called Nir Shavit, who is a Jewish scientist from Israel, has just invented a very interesting software called Neural Magic that does it efficiently, and I invite you to look at the following very interesting video of Nir Shavit to know more about it:

The Software GPU: Making Inference Scale in the Real World by Nir Shavit, PhD

https://www.youtube.com/watch?v=mGj2CJHXXKQ
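The idea behind running inference efficiently on CPUs, as I understand it from Neural Magic's public talks (this sketch is my assumption, not their code), is that pruned networks are mostly zeros, so the compute can scale with the stored non-zeros instead of the full matrix, for example with a CSR sparse matrix-vector product:

```cpp
#include <cstddef>
#include <vector>

// CSR sparse matrix-vector product y = A*x: the inner loop touches only the
// stored non-zeros, so a 95%-sparse layer does roughly 5% of the dense work.
std::vector<float> spmv(const std::vector<int>& row_ptr,
                        const std::vector<int>& col_idx,
                        const std::vector<float>& vals,
                        const std::vector<float>& x) {
    std::vector<float> y(row_ptr.size() - 1, 0.0f);
    for (std::size_t r = 0; r + 1 < row_ptr.size(); ++r)
        for (int k = row_ptr[r]; k < row_ptr[r + 1]; ++k)
            y[r] += vals[k] * x[col_idx[k]];
    return y;
}
```

With enough sparsity, the reduced arithmetic plus the CPU's large memory and caches is the argument for why a CPU can compete with a GPU on inference.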
And it is not only the Jewish scientist above, called Nir Shavit, who has invented a very interesting thing; there is also the following Muslim Iranian Postdoctoral Associate who has also invented a very interesting thing for artificial intelligence, and here it is:

Why is MIT's new "liquid" AI a breakthrough innovation?
Read more here:
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fintelligence-artificielle.developpez.com%2Factu%2F312174%2FPourquoi-la-nouvelle-IA-liquide-de-MIT-est-elle-une-innovation-revolutionnaire-Elle-apprend-continuellement-de-son-experience-du-monde%2F
And here is Ramin Hasani, Postdoctoral Associate (he is an Iranian):
https://www.csail.mit.edu/person/ramin-hasani
And here he is:
http://www.raminhasani.com/
He is the lead author of the following new study:
New ‘Liquid’ AI Learns Continuously From Its Experience of the World
Read more here:
https://singularityhub.com/2021/01/31/new-liquid-ai-learns-as-it-experiences-the-world-in-real-time/
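A hedged sketch of the "liquid" idea (my simplification of the liquid time-constant neuron described by Hasani and colleagues, with assumed parameter values): each neuron is a small ODE whose effective time constant depends on the current input, integrated here with one explicit Euler step:

```cpp
#include <cmath>

// Simplified liquid time-constant neuron (assumed form, for illustration):
// dx/dt = -(1/tau + f) * x + f * A, where the gate f depends on the input,
// so the effective time constant changes as the input stream changes --
// the "learning continuously from experience" behavior described above.
double ltc_step(double x, double input, double dt) {
    const double tau = 1.0, A = 1.0;            // assumed constants
    double f = 1.0 / (1.0 + std::exp(-input));  // sigmoid gate on the input
    double dxdt = -(1.0 / tau + f) * x + f * A;
    return x + dt * dxdt;                       // one explicit Euler step
}
```

Because the dynamics themselves adapt to the input, the state keeps responding to new data after training, which is the contrast with a fixed feed-forward network.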
And here is my thoughts about artificial intelligence and evolutionary algorithms in artificial intelligence:
https://groups.google.com/g/alt.culture.morocco/c/P9OTDTiCZ44
Thank you,
Amine Moulay Ramdane.