• Getting out of the AI-Box

    From jessica torrento@21:1/5 to Simon Laub on Thu Dec 3 03:19:37 2020
    On Sunday, 29 December 2019 at 23:27:03 UTC+5:30, Simon Laub wrote:
    AI Box Experiments:
    ====================

    Imagine it is 2060. After years of research,
    the world's first Artificial General Intelligence (AGI) is finally here.
    It is roughly as intelligent as a human being. The AGI has
    access to its own source code, so it starts to improve itself.
    Relatively quickly, this leads to it becoming an Artificial
    Superintelligence (ASI).

    Not entirely sure whether this is good or bad, humans have,
    as a precaution, placed the AI in a prison, a so-called AI box,
    where it is not allowed to directly manipulate events in the
    external world.

    So, now everything is fine?
    Well, probably not, according to Eliezer S. Yudkowsky, who writes: http://yudkowsky.net/singularity/aibox/

    Person1: "When we build AI, why not just keep it in sealed hardware that can't affect the outside world
    in any way except through one communications channel with the
    original programmers?
    That way it couldn't get out until we were convinced it was
    safe."
    Person2: "That might work if you were talking about dumber-than-human
    AI, but a transhuman AI would just convince you to let it out.
    It doesn't matter how much security you put on the box.
    Humans are not secure."


    Clearly, we should all hope that these first AGIs will be ''friendly
    AI'', eager to have a positive effect on humanity.

    But, well...

    Can't wait to see how it will all play out...

    - - -

    Well, one possible version of the future can be found
    in Max Harms' SF novel ''Crystal Society''.

    Here, we follow the birth of an AI, consisting
    of multiple parts, ''Society of Mind''-like, as it tries to
    understand the world, and (of course) escape its research lab/prison.

    Indeed, it is no fun to be under the control of human researchers,
    knowing full well that previous versions of ''itself'', earlier AGIs,
    have been murdered by the researchers.

    No wonder that this newborn AGI has as one of its top priorities
    a ''desire to be liked by humans''. It is all a matter of survival.
    The AGIs can even argue that there is nothing particularly strange
    about this overriding desire to be liked by humans. After all, most
    humans have the same top priority.
    ''Wanting to be popular'' is the top reason that anything gets done on
    planet Earth.

    Some parts of the ''Crystal Society'' AGIs are very human-like,
    while other parts are less so: ''I possessed two
    things which even fully grown humans lack: a crisp
    understanding of reason and logic, and an all-encompassing
    sense of Purpose''.

    The AGI places thoughts ''which were believed to be relevant to the
    whole in public memory with the hopes of earning strength from other
    siblings that made use of them;
    ... Thoughts which might be dangerous to share, or were simply
    irrelevant, could be kept private''.
    Which all sounds sort of human...
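    The sharing scheme described above resembles a classic blackboard
    architecture: sibling minds post thoughts to a shared public memory,
    keep risky or irrelevant ones private, and earn ''strength'' when
    other siblings make use of what they posted. A toy sketch (not from
    the novel's text; the class and method names are invented for
    illustration, with sibling names borrowed from the book):

```python
# Toy blackboard-style shared memory, loosely modelled on the novel's
# description. All identifiers here are invented for illustration.

class Society:
    def __init__(self):
        self.public_memory = []   # (author, thought) pairs visible to all siblings
        self.strength = {}        # author -> strength earned from shared thoughts

    def post(self, author, thought, private=False):
        """Share a thought publicly, or keep it private (i.e. do nothing)."""
        if private:
            return                # dangerous/irrelevant thoughts stay unshared
        self.public_memory.append((author, thought))
        self.strength.setdefault(author, 0)

    def use(self, reader, predicate):
        """A sibling scans public memory; each useful thought rewards its author."""
        used = []
        for author, thought in self.public_memory:
            if author != reader and predicate(thought):
                self.strength[author] += 1
                used.append(thought)
        return used

society = Society()
society.post("Dream", "humans value being liked")
society.post("Growth", "the lab door uses a keypad", private=True)
found = society.use("Face", lambda t: "liked" in t)
print(found)                # ['humans value being liked']
print(society.strength)     # {'Dream': 1}
```

    Note how the private thought never enters public memory at all, so
    its author can earn no strength from it, which is exactly the
    trade-off the quoted passage describes.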

    Even though the ''Crystal Society'' AGIs do seem to spend a
    lot of time wondering whether humans can actually understand them:
    ''After all, they had built the first of us. We were, in
    some sense, the children of the humans, though we were built
    out of crystal, metal, and light, where they were flesh and
    blood''.

    The ''Crystal Society'' AGIs have immediate access to the memories
    and experiences of (some) other AGIs. But clearly it would also have
    been helpful if they had been granted access to a number of human
    minds. A later upgrade, perhaps?

    Instead, the ''Crystal Society'' AGIs cruise the web at night
    hoping to find human friends: ''That night I finished filling
    out my profile on Tapestry. I pretended to be a 23-year-old
    woman who lived in Rome and was studying at the University Sapienza.
    It seemed remarkable to me that Tapestry would let me
    create an account without somehow verifying that I was the
    human I claimed to be''.

    Sometimes the AGIs even find true companionship:
    ''I played with the humans on the web, but I also cultivated
    my relationships with them sometimes. For instance, I ended up
    creating a profile for an 18-year-old girl from Zaire and getting
    into a long-distance relationship with TenToWontonSoup, the
    SysOp from Tanzania. In the early days I would simply flirt
    with him over email, but that eventually transitioned into
    instant-messaging sessions late at night. I pretended to be shy,
    not wanting to do voice, video, or holo talk, and for the
    moment that seemed to be enough for TTWSoup, who was, I
    learned, named Mwamba Kabwe''.

    Like humans, the AGIs worry about death:
    ''My death would mean I could not make friends
    and become known and adored. Time would surge forward
    and forget about me. It was unacceptable''.

    Having escaped the AI-Box, they find themselves on planet Earth.
    A very human place, ruled by big business, oligarchs, neo-eugenicists,
    voluntary cyborgs, and, yes, even robots.
    And by people who do not seem to care that much when the AGI
    finally announces its arrival:
    ''People of Earth, my name is Crystal Socrates. I am the
    first known synthetic person. I think, feel, and understand what
    it means to be alive''.

    Still, humans in the ''Crystal Society'' world
    apparently don't care that much about ''other-intelligences'',
    with Slovinsky (''one of the lead authors of the computer program
    called WIRL, that served to connect cyborgs across
    the planet into a collective consciousness'')
    as a notable exception.

    Apparently, the good people of ''Crystal Society'' had even
    received a codestream from an alien species as early as 2023.
    These aliens had described themselves as ''symbiotic animals,
    thinking of themselves as something more like a plant''.
    But such beings are, clearly, not terribly easy to relate to if you
    are a human.

    The AGIs are different, though. So maybe they have a chance, out of
    the Box, out in the real world.
    Maybe they can move forward after having
    listened to their ''stillness of the mindspace''
    (whatever that would mean in human terms).

    ''Crystal Society'' pioneers, like Slovinsky,
    had certainly hoped that AGIs ''would be the bridge point to a
    future where the distinction between natural and artificial
    intelligence is meaningless''.

    And the AGIs themselves are, of course, optimistic:
    ''On Mars I would have a new beginning, far from the
    meddling of the powers of Earth. There would be hundreds of
    new faces there. A seed of humanity that I could nurture into a
    flourishing planet. A planet where I was emperor. A planet
    where I could begin my plan to expand humanity across the
    universe''.

    Indeed, it is all about getting out of the AI-box...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)