• Artificial Intelligence Shows Why Atheism Is Unpopular

    From Brewster@21:1/5 to All on Sun Jul 29 13:42:43 2018
    XPost: alt.atheism, sac.politics
    XPost: alt.politics.democrats, alt.politics.homosexuality

    Imagine you’re the president of a European country. You’re slated to
    take in 50,000 refugees from the Middle East this year. Most of them
    are very religious, while most of your population is very secular. You
    want to integrate the newcomers seamlessly, minimizing the risk of
    economic malaise or violence, but you have limited resources. One of
    your advisers tells you to invest in the refugees’ education; another
    says providing jobs is the key; yet another insists the most important
    thing is giving the youth opportunities to socialize with local kids.
    What do you do?

    Well, you make your best guess and hope the policy you chose works
    out. But it might not. Even a policy that yielded great results in
    another place or time may fail miserably in your particular country
    under its present circumstances. If that happens, you might find
    yourself wishing you could hit a giant reset button and run the whole experiment over again, this time choosing a different policy. But of
    course, you can’t experiment like that, not with real people.

    You can, however, experiment like that with virtual people. And that’s
    exactly what the Modeling Religion Project does. An international team
    of computer scientists, philosophers, religion scholars, and others
    are collaborating to build computer models that they populate with
    thousands of virtual people, or “agents.” As the agents interact with
    each other and with shifting conditions in their artificial
    environment, their attributes and beliefs—levels of economic security,
    of education, of religiosity, and so on—can change. At the outset, the researchers program the agents to mimic the attributes and beliefs of
    a real country’s population using survey data from that country. They
    also “train” the model on a set of empirically validated
    social-science rules about how humans tend to interact under various
    pressures.

    And then they experiment: Add in 50,000 newcomers, say, and invest
    heavily in education. How does the artificial society change? The
    model tells you. Don’t like it? Just hit that reset button and try a
    different policy.
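
    To make that concrete, here is a minimal sketch, in Python, of the
    kind of agent-based experiment described above. The attribute names,
    update rules, policy effects, and every number are invented for
    illustration; this is a toy with the same general shape as the
    project’s models, not their actual code.

        # Toy agent-based policy experiment. All rules and numbers are
        # illustrative assumptions, not the Modeling Religion Project's code.
        import random
        from dataclasses import dataclass
        from statistics import mean

        @dataclass
        class Agent:
            economic_security: float  # 0.0 (precarious) .. 1.0 (secure)
            education: float          # 0.0 .. 1.0
            religiosity: float        # 0.0 (secular) .. 1.0 (devout)

        def clamp(x):
            return min(1.0, max(0.0, x))

        def make_population(n, means):
            """Initialize agents around survey-style averages (made up here)."""
            return [Agent(*(clamp(random.gauss(means[k], 0.15))
                            for k in ("security", "education", "religiosity")))
                    for _ in range(n)]

        def step(population, policy):
            """One simulated year: agents drift toward the religiosity of
            random contacts, and the chosen policy nudges one attribute."""
            for agent in population:
                contacts = random.sample(population, 5)
                drift = mean(c.religiosity for c in contacts) - agent.religiosity
                agent.religiosity = clamp(agent.religiosity + 0.1 * drift)
                if policy == "education":
                    agent.education = clamp(agent.education + 0.02)
                elif policy == "jobs":
                    agent.economic_security = clamp(agent.economic_security + 0.02)
                # other policies (e.g. "socializing") are no-ops in this toy

        def run_experiment(policy, years=10, seed=0):
            """The 'reset button': every run restarts from identical conditions."""
            random.seed(seed)
            hosts = make_population(5000, {"security": 0.8, "education": 0.7,
                                           "religiosity": 0.3})
            newcomers = make_population(500, {"security": 0.3, "education": 0.4,
                                              "religiosity": 0.8})
            population = hosts + newcomers
            for _ in range(years):
                step(population, policy)
            return mean(a.religiosity for a in population)

        for policy in ("education", "jobs", "none"):
            print(policy, round(run_experiment(policy), 3))

    Re-running run_experiment with the same seed but a different policy
    plays the role of the reset button: identical starting conditions, a
    different intervention.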

    The goal of the project is to give politicians an empirical tool that
    will help them assess competing policy options so they can choose the
    most effective one. It’s a noble idea: If leaders can use artificial intelligence to predict which policy will produce the best outcome,
    maybe we’ll end up with a healthier and happier world. But it’s also a dangerous idea: What’s “best” is in the eye of the beholder, after
    all.

    “Because all our models are transparent and the code is always
    online,” said LeRon Shults, who teaches philosophy and theology at the University of Agder in Norway, “if someone wanted to make people more in-group-y, more anxious about protecting their rights and their group
    from the threat of others, then they could use the model to [figure
    out how to] ratchet up anxiety.”

    The Modeling Religion Project—which has collaborators at Boston’s
    Center for Mind and Culture, and the Virginia Modeling, Analysis, and Simulation Center, as well as the University of Agder—has been running
    for the past three years, with funding from the John Templeton
    Foundation. It wrapped up last month. But it’s already spawned several
    spin-off projects.

    The one that focuses most on refugees, Modeling Religion in Norway
    (MODRN), is still in its early phases. Led by Shults, it’s funded
    primarily by the Research Council of Norway, which is counting on the
    model to offer useful advice on how the Norwegian government can best
    integrate refugees. Norway is an ideal place to do this research, not
    only because it’s currently struggling to integrate Syrians, but also
    because the country has gathered massive data sets on its population.
    By using them to calibrate his model, Shults can get more accurate and fine-grained predictions, simulating what will happen in a specific
    city and even a specific neighborhood.

    Another project, Forecasting Religiosity and Existential Security with
    an Agent-Based Model, examines questions about nonbelief: Why aren’t
    there more atheists? Why is America secularizing at a slower rate than
    Western Europe? Which conditions would speed up the process of secularization—or, conversely, make a population more religious?

    Shults’s team tackled these questions using data from the
    International Social Survey Program conducted between 1991 and 1998.
    They initialized the model in 1998 and then allowed it to run all the
    way through 2008. “We were able to predict from that 1998 data—in 22
    different countries in Europe, and Japan—whether and how belief in
    heaven and hell, belief in God, and religious attendance would go up
    and down over a 10-year period. We were able to predict this in some
    cases up to three times more accurately than linear regression
    analysis,” Shults said, referring to the general-purpose statistical
    method that had been the best available approach before the team’s work.
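
    As a rough illustration of the kind of comparison involved: the
    regression baseline fits a straight line to earlier survey waves and
    extrapolates it forward, and both forecasts are then scored against
    the later observation. Everything below is a placeholder sketch of
    that procedure, not ISSP data or the team’s actual evaluation.

        # Hypothetical evaluation in the style described above; every value
        # is a made-up placeholder.
        def fit_line(xs, ys):
            """Ordinary least squares for y = a + b*x."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
            return my - b * mx, b

        years = [1991, 1993, 1995, 1998]       # placeholder survey waves
        attendance = [0.42, 0.40, 0.39, 0.37]  # placeholder attendance shares
        observed_2008 = 0.35                   # placeholder "ground truth"
        simulated_2008 = 0.333                 # placeholder agent-based forecast

        a, b = fit_line(years, attendance)
        regression_2008 = a + b * 2008

        print("regression error:", round(abs(regression_2008 - observed_2008), 3))
        print("simulation error:", round(abs(simulated_2008 - observed_2008), 3))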

    Using a separate model, Future of Religion and Secular Transitions
    (FOREST), the team found that people tend to secularize when four
    factors are present: existential security (you have enough money and
    food), personal freedom (you’re free to choose whether to believe or
    not), pluralism (you have a welcoming attitude to diversity), and
    education (you’ve got some training in the sciences and humanities).
    If even one of these factors is absent, the whole secularization
    process slows down. This, they believe, is why the U.S. is
    secularizing at a slower rate than Western and Northern Europe.
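
    As a hedged sketch of how such a rule might look inside a model:
    only the four factors themselves come from the team’s account; the
    threshold, the slowdown formula, and the numbers below are
    assumptions made for this example.

        # Toy version of the four-factor secularization finding (FOREST).
        # The factors are from the article; everything else is assumed.
        def secularization_rate(existential_security, personal_freedom,
                                pluralism, education,
                                threshold=0.6, base_rate=0.02):
            """Annual decline in average religiosity: each factor that falls
            below the threshold slows the whole process down."""
            factors = (existential_security, personal_freedom,
                       pluralism, education)
            missing = sum(1 for f in factors if f < threshold)
            return base_rate / (1 + 2 * missing)

        # Secure, free, and educated, but resistant to pluralism:
        print(secularization_rate(0.9, 0.9, 0.4, 0.8))  # slowed
        # All four factors present:
        print(secularization_rate(0.9, 0.9, 0.9, 0.8))  # full base rate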

    “The U.S. has found ways to limit the effects of education by keeping
    it local, and in private schools, anything can happen,” said Shults’s collaborator, Wesley Wildman, a professor of philosophy and ethics at
    Boston University. “Lately, there’s been encouragement from the
    highest levels of government to take a less than welcoming cultural
    attitude to pluralism. These are forms of resistance to
    secularization.”

    Another project, Mutually Escalating Religious Violence (MERV), aims
    to identify which conditions make xenophobic anxiety between two
    different religious groups likely to spiral out of control. As they
    built this model, the team brought in an outside expert: Monica Toft,
    an international-relations scholar with no experience in computational
    modeling but a wealth of expertise in religious extremism.

    “They brought me in so I could do a reality check—like, do the
    [social-science] assumptions behind this model make sense? And then to
    evaluate whether this tracks with case studies in reality,” Toft told
    me. At first, she said, “I was a little skeptical with this stuff. But
    I think what surprised me was how well it modeled onto the Gujarat
    case.” She was referring to the 2002 riots that erupted in the Indian
    state of Gujarat: three bloody days during which Muslims and Hindus
    clashed violently, resulting in hundreds of deaths on both sides.
    (According to official figures, 790 Muslims and 254 Hindus were
    killed.) “When I started looking at the data, I said to LeRon and
    Wesley, ‘Oh my god!’ Because I knew the case of Gujarat and what
    happened there. It matched the model beautifully. It was really
    exciting.”

    MERV shows that mutually escalating violence is likeliest to occur if
    there’s a small disparity in size between the majority and minority
    groups (less than a 70/30 split) and if agents experience out-group
    members as social and contagion threats (they worry that others will
    be invasive or infectious). It’s much less likely to occur if there’s
    a large disparity in size or if the threats agents are experiencing
    are mostly related to predators or natural hazards.
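
    One way to encode that finding, with the caveat that the function
    name, the risk labels, and the exact decision logic are assumptions
    for this sketch; only the 70/30 threshold and the threat categories
    come from the description above.

        # Illustrative encoding of the MERV result summarized above.
        def escalation_risk(majority_share, dominant_threats):
            """majority_share: fraction of the population in the larger group.
            dominant_threats: the threat types agents mostly experience,
            e.g. {"social", "contagion"} or {"predator", "natural_hazard"}."""
            small_disparity = majority_share < 0.70  # under a 70/30 split
            outgroup_threat = bool({"social", "contagion"} & dominant_threats)
            return "high" if small_disparity and outgroup_threat else "low"

        print(escalation_risk(0.60, {"social", "contagion"}))  # high
        print(escalation_risk(0.85, {"social"}))          # low: large disparity
        print(escalation_risk(0.60, {"natural_hazard"}))  # low: no out-group threat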

    This might sound intuitive, but having quantitative, empirical data to
    support social-science hypotheses can help show policymakers when and
    how to act if they want to prevent future outbreaks of
    violence. And once a model has been shown to track with real-world
    historical examples, scientists can more plausibly argue that it will
    yield a trustworthy recommendation when it’s fed new situations. Asked
    what MERV has to offer us, Toft said, “We can stop these dynamics. We
    do not need to allow them to spiral out of control.”

    To that end, the next step is getting others interested in trying out
    the models. But that’s proven difficult. The research has been
    published in outlets like the Journal of Cognition and Culture and is
    under review at Nature, and the team is building an online platform
    that will allow people with zero programming experience to create
    agent-based models. Still, Wildman is pessimistic about his own
    ability to get politicians interested in such a new and highly
    technical methodology.

    “Whenever there’s bafflement, you’ve got a trust problem, and I think
    there will be a trust problem here,” he said. “We’re modelers,
    sociologists, philosophers—we’re academic geeks, basically. We’re
    never going to convince them to trust a model.” But he believes that
    policy analysts, acting as bridges between the academic world and the
    policy world, will be able to convince the politicians. “We’re going
    to get them in the end.”

    Even harder to sway may be those concerned not with the methodology’s
    technical complications, but with its ethical complications. As
    Wildman told me, “These models are equal-opportunity insight
    generators. If you want to go militaristic, then these models tell you
    what the targets should be.”

    When you build a model, you can accidentally produce recommendations
    that you weren’t intending. Years ago, Wildman built a model to figure
    out what makes some extremist groups survive and thrive while others disintegrate. It turned out one of the most important factors is a
    highly charismatic leader who personally practices what he preaches.
    “This immediately implied an assassination criterion,” he said. “It’s basically, leave the groups alone when the leaders are less
    consistent, [but] kill the leaders of groups that have those specific qualities. It was a shock to discover this dropping out of the model.
    I feel deeply uncomfortable that one of my models accidentally
    produced a criterion for killing religious leaders.”

    The results of that model have been published, so it may already have
    informed military action. “Is this type of thing being used to figure
    out criteria for drone killings? I don’t know, because there’s this
    giant wall between the secret research in the U.S. and the non-secret
    side,” Wildman said. “I’ve come to assume that on the secret side
    they’ve pretty much already thought of everything we’ve thought of,
    because they’ve got more money and are more focused on those issues.
    ... But it could be that this model actually took them there. That’s a
    serious ethical conundrum.”

    The other models raise similar concerns, he said. “The MODRN model
    gives you a recipe for accelerating secularization—and it gives you a
    recipe for blocking it. You can use it to make everything revert to supernaturalism by messing with some of those key conditions—say, by
    triggering some ecological disaster. Then everything goes plunging
    back into pre-secularism. That keeps me up at night.”

    According to Neil Johnson, a physicist who models terrorism and other
    extreme behaviors that arise in complex systems, “That’s an
    overstatement of the power of the models.” There’s no way that
    removing one factor from a society can reliably be counted on to slow
    or stop secularization, he said. That may well be true in the model,
    but “that’s a cartoon of the real world.” A real human society is so
    complex that “all the things may be interconnected in a different way
    than in the model.”

    Although Johnson said he found the team’s research useful and
    important, he was unimpressed by their claim to have outperformed
    previous predictive methods. “Linear regression analysis is not very
    powerful for prediction,” he said. “I was a little surprised by the
    strength of their claims.” He cautioned that we should be skeptical
    about the word “prediction” in relation to this type of model;
    “opinion” might be better.

    “It’s great to have as a tool,” he said. “It’s like, you go to the
    doctor, they give an opinion. It’s always an opinion, we never say a
    doctor’s prediction. Usually, we go with the doctor’s opinion because
    they’ve seen many cases like this, many humans who come in with the
    same thing. It’s even more of an opinion with these types of models,
    because they haven’t necessarily seen many cases just like it—history
    mimics the past but doesn’t exactly repeat it.”

    The silver lining here is that if the power of the models is being
    overstated, then so, too, is the ethical concern.

    Nevertheless, just like Wildman, Shults told me, “I lose sleep at
    night on this. ... It is social engineering. It just is—there’s no
    pretending like it’s not.” But he added that other groups, like
    Cambridge Analytica, are doing this kind of computational work, too.
    And various bad actors will do it without transparency or public accountability. “It’s going to be done. So not doing it is not the
    answer.” Instead, he and Wildman believe the answer is to do the work
    with transparency and simultaneously speak out about the ethical
    danger inherent in it.

    “That’s why our work here is two-pronged: I’m operating as a modeler
    and as an ethicist,” Wildman said. “It’s the best I can do.”

    https://www.theatlantic.com/international/archive/2018/07/artificial-intelligence-religion-atheism/565076/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)