An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” https://archive.is/nUt6L
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control. https://futureoflife.org/open-letter/pause-giant-ai-experiments
some optimism:
https://www.youtube.com/watch?v=f3lUEnMaiAU&t=32m10s
I believe Jack Ma is correct; that ultimately AI designed by good people will defeat harmful/destructive AI designed by bad people. Computers are just a tool - a complex tool - but still just a tool, like a hammer. A hammer can build or it can kill, depending on the will of who wields it.
That said, to reinforce another of Jack Ma's points in that video (the whole thing is worth watching), humans may not be as clever as AI, but we have wisdom and experience that AI lacks. Our survival will depend on our own insight and intuition, to avoid danger, etc. Eg. escape the city, where hostile AI will be concentrated; avoid advanced networked technology (eg. smartphones), where hostile AI may be deployed, etc.
"This video isn't available any more"
Folk won't know when AI is wrong, so they will be unable to differentiate between good and bad.
AI will reinforce its own "beliefs" on itself.
the joy and freedom of landscapes
seething resentment to their AI servants
If they operate on stimulus and learned response (as we do), can their responses become more perfect than those of their human creators?