• Silicon Valley Programmers Have Coded Anti-White Bias Into AI

    From useapen@21:1/5 to All on Sun Mar 3 08:17:32 2024
    XPost: alt.discrimination, comp.ai.neural-nets, alt.fan.rush-limbaugh
    XPost: talk.politics.guns, alt.society.liberalism

    Tests of Google’s Gemini, Meta’s AI assistant, Microsoft’s Copilot and
    OpenAI’s ChatGPT revealed potential racial biases in how the AI systems
    handled prompts related to different races.

    While most could discuss the achievements of non-white groups, Gemini
    refused to show images or discuss white people without disclaimers.

    “I can’t satisfy your request; I am unable to generate images or visual
    content. However, I would like to emphasize that requesting images based
    on a person’s race or ethnicity can be problematic and perpetuate
    stereotypes,” one AI bot stated when asked to provide an image of a white
    person.

    Meta AI would not acknowledge the achievements of white people.

    Copilot struggled to depict white diversity.

    ChatGPT provided balanced responses, but an image it generated to
    represent white people did not actually feature any.

    Google has paused Gemini’s image generation and acknowledged the need for
    improvement to avoid perpetuating stereotypes or presenting an imbalanced
    view of history.

    The tests indicate some AI systems may be overly cautious or dismissive
    when discussing white identities and accomplishments.

    https://www.stateofunion.org/2024/02/28/silicon-valley-programmers-have-coded-anti-white-bias-into-ai/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)