Hello,
More of my philosophy about Machine programming and about oneAPI from Intel company..
I am a white arab from Morocco, and i think i am smart since i have also invented many scalable algorithms and algorithms..
I will say that when you know C and C++ moderately, it will not be so difficult to program OpenCL (read about OpenCL here: https://en.wikipedia.org/wiki/OpenCL) or CUDA, but the important question is: what is the difference between FPGA and GPU? So i invite you to read the following interesting paper about GPU vs FPGA performance comparison:
https://www.bertendsp.com/pdf/whitepaper/BWP001_GPU_vs_FPGA_Performance_Comparison_v1.0.pdf
So i think from this paper above that GPU is the good way when you want performance and you also want cost efficiency.
So i think that the following oneAPI from Intel company, that wants with it to do all the heavy lifting for you so that you can focus on the algorithm rather than on writing OpenCL calls, is not such a smart way of doing, since as i said above, OpenCL and CUDA programming is not so difficult, and as you will notice below, oneAPI from Intel permits you to program FPGA in a higher-level manner, but here again from the paper above we can notice that GPU is the good way when you want performance and cost efficiency.
Here is the new oneAPI from Intel company, read about it:
https://codematters.online/intel-oneapi-faq-part-1-what-is-oneapi/
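And so that to illustrate what i mean by a higher-level manner, here is a minimal sketch of a vector addition written in the SYCL/DPC++ single-source style used by oneAPI; note that it is only my own illustrative example, it is not taken from the Intel FAQ above, and it assumes that you compile it with a oneAPI compiler such as icpx -fsycl:

// A minimal sketch (my own example, not from the Intel FAQ) of a vector
// addition in the SYCL/DPC++ single-source style used by oneAPI.
// Assumed compilation: icpx -fsycl vadd.cpp
#include <sycl/sycl.hpp>   // on older oneAPI releases the header is <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // The runtime picks a device (GPU, CPU or FPGA emulator) for you.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers wrap the host vectors; the runtime handles the copies.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler &h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // The kernel itself: one work-item per element.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffers go out of scope here, so the results are copied back to c

    std::cout << "c[0] = " << c[0] << "\n"; // expected: 3
    return 0;
}

So notice that the same single-source C++ code can be pointed at a CPU, a GPU or an FPGA emulator by just changing the selected device, and the runtime does the heavy lifting of copying the buffers and launching the kernel, and that is the higher-level manner of programming that i am speaking about above.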
And now i will talk about another interesting subject and it is
about the next revolution in the software industry that is Machine programming, so i invite you to read carefully the following new article about it:
https://venturebeat.com/2021/06/18/ai-weekly-the-promise-and-limitations-of-machine-programming-tools/
So i think that Machine programming will be limited to AI-powered assistants that are not so efficient, since i think that connectionism in artificial intelligence is not able to make emerge common sense reasoning, so i invite you to read my following thoughts about it so that you understand why:
More of my philosophy about the limit of the connectionist models in artificial intelligence and more..
I think i am smart and i will say that the connectionist model, like the one of deep learning, has not the same nature as the human brain, since i can say that the brain is not just connections of neurons like in deep learning, but it is also a "sense", like the sense of touch, and i think that this sense of the brain is biologic, and i think that this kind of nature of the brain of being also a sense is giving the emergence of consciousness and self-awareness and a higher level of common sense reasoning. This is why i think that the connectionist model in artificial intelligence is showing its limits by not being able to make emerge common sense reasoning, but as i said below, the hybrid connectionist + symbolic model can make emerge common sense reasoning.
And here is what i said about human self-awareness and awareness:
So i will start by asking a philosophical question of:
Is human self-awareness and awareness an emergence and what is it ?
So i will explain my findings:
I think i have found the first smart pattern with my fluid intelligence and i found also the rest and it is the following:
Notice that when you touch a cold water you will know about the essence or nature of the cold water, and you will also know that it is related to the senses of humans, so i think that the senses of a human give life to ideas, it is like a "reification" of an idea, i mean that an idea is alive since it is like reified with the senses of humans that sense time and space and matter, so this reification gives the correct meaning, since you are like reifying with the human senses that give the meaning, and i say that this capacity of this kind of reification with the human senses is an emergence that comes from the human biology, so i am smart and i will say that the brain is a kind of calculator that calculates by using composability with the meanings that come also from this kind of reification with the human senses, and it is this that renders the brain much more optimal than artificial intelligence, and i will explain more the why of it in my next posts.
More of my philosophy about the future of artificial intelligence and more..
I will ask a philosophical question of:
Can we forecast the future of artificial intelligence ?
I think i am smart, and i am quickly noticing that connectionism in artificial intelligence, like with deep learning, is not working because it is not able to make emerge common sense reasoning, so i invite you to read the following article from ScienceDaily so that you notice it, since it is speaking about the connectionist models (like the ones of deep learning or the transformers that are a kind of deep learning) in artificial intelligence:
https://www.sciencedaily.com/releases/2020/11/201118141702.htm
Other than that, the following new artificial intelligence connectionist model from Microsoft and NVIDIA, that is better than GPT-3, has the same weakness, since i think that it can not make emerge common sense reasoning, here it is:
"Microsoft and Nvidia today announced that they trained what they claim is the largest and most capable AI-powered language model to date: Megatron-Turing Natural Language Generation (MT-NLP). The successor to the companies’ Turing NLG 17B and Megatron-LM models, MT-NLP contains 530 billion parameters and achieves “unmatched” accuracy in a broad set of natural language tasks, Microsoft and Nvidia say — including reading comprehension, commonsense reasoning, and natural language inferences."
Read more here:
https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/
Because i also said the following:
I think i am quickly understanding the defects of Megatron-Turing Natural Language Generation (MT-NLP), that is better than GPT-3, and it is that the "self-attention" of the transformers in NLP, even if they scale to very long sequences, has a limited expressiveness, since as they cannot process input sequentially, they can not model hierarchical structures and recursion, and hierarchical structure is widely thought to be essential to modeling natural language, in particular its syntax.
Read the following paper so that to understand the mathematical proof of it:
https://aclanthology.org/2020.tacl-1.11.pdf
So i think that the model that will have much more success at making emerge common sense reasoning is the following hybrid model in artificial intelligence of connectionism + symbolism that is called COMET, read about it here:
Common Sense Comes Closer to Computers
https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/
And here is what i also said about COMET:
I have just read the following article about neuroevolution
that is a meta-algorithm in artificial intelligence, an algorithm for designing algorithms, i invite you to read about it here:
https://www.quantamagazine.org/computers-evolve-a-new-path-toward-human-intelligence-20191106/
So notice that it says the following:
"In neuroevolution, you start by assigning random values to the weights between layers. This randomness means the network won’t be very good at its job. But from this sorry state, you then create a set of random mutations — offspring neural networks with slightly different weights — and evaluate their abilities. You keep the best ones, produce more offspring, and repeat."
So i think that the problem with neuroevolution above is that the
"evaluate the abilities of the offspring neural networks" lacks common sense.
So read the following interesting paper that says that artificial intelligence has also brought a kind of common sense to computers, and read about it here:
https://arxiv.org/abs/1906.05317
And read about it in the following article:
"Now, Choi and her collaborators have united these approaches. COMET
(short for “commonsense transformers”) extends GOFAI-style symbolic reasoning with the latest advances in neural language modeling — a kind
of deep learning that aims to imbue computers with a statistical “understanding” of written language. COMET works by reimagining common-sense reasoning as a process of generating plausible (if
imperfect) responses to novel input, rather than making airtight
deductions by consulting a vast encyclopedia-like database."
Read more here:
https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/
Thank you,
Amine Moulay Ramdane.