What Really Happened When Google Racists Ousted Timnit Gebru

    From Exclusive Originals@21:1/5 to All on Sat Jun 12 10:53:37 2021
    XPost: alt.society.liberalism, alt.privacy.anon-server, alt.discrimination

    ONE AFTERNOON IN late November of last year, Timnit Gebru was
    sitting on the couch in her San Francisco Bay Area home, crying.

    Gebru, a researcher at Google, had just clicked out of a last-
    minute video meeting with an executive named Megan Kacholia, who
    had issued a jarring command. Gebru was the coleader of a group
    at the company that studies the social and ethical ramifications
    of artificial intelligence, and Kacholia had ordered Gebru to
    retract her latest research paper—or else remove her name from
    its list of authors, along with those of several other members
    of her team.

    The paper in question was, in Gebru’s mind, pretty
    unobjectionable. It surveyed the known pitfalls of so-called
    large language models, a type of AI software—most famously
    exemplified by a system called GPT-3—that was stoking excitement
    in the tech industry. Google’s own version of the technology was
    now helping to power the company’s search engine. Jeff Dean,
    Google’s revered head of research, had encouraged Gebru to think
    about the approach’s possible downsides. The paper had sailed
    through the company’s internal review process and had been
    submitted to a prominent conference. But Kacholia now said that
    a group of product leaders and others inside the company had
    deemed the work unacceptable, Gebru recalls. Kacholia was vague
    about their objections but gave Gebru a week to act. Her firm
    deadline was the day after Thanksgiving.

    Gebru’s distress turned to anger as that date drew closer and
    the situation turned weirder. Kacholia gave Gebru’s manager,
    Samy Bengio, a document listing the paper’s supposed flaws, but
    told him not to send it to Gebru, only to read it to her. On
    Thanksgiving Day, Gebru skipped some festivities with her family
    to hear Bengio’s recital. According to Gebru’s recollection and
    contemporaneous notes, the document didn’t offer specific edits
    but complained that the paper handled topics “casually” and
    painted too bleak a picture of the new technology. It also
    claimed that all of Google’s uses of large language models were
    “engineered to avoid” the pitfalls that the paper described.

    Gebru spent Thanksgiving writing a six-page response, explaining
    her perspective on the paper and asking for guidance on how it
    might be revised instead of quashed. She titled her reply
    “Addressing Feedback from the Ether at Google,” because she
    still didn’t know who had set her Kafkaesque ordeal in motion,
    and sent it to Kacholia the next day.

    On Saturday, Gebru set out on a preplanned cross-country road
    trip. She had reached New Mexico by Monday, when Kacholia
    emailed to ask for confirmation that the paper would either be
    withdrawn or cleansed of its Google affiliations. Gebru tweeted
    a cryptic reproach of “censorship and intimidation” against AI
    ethics researchers. Then, on Tuesday, she fired off two emails:
    one that sought to end the dispute, and another that escalated
    it beyond her wildest imaginings.

    The first was addressed to Kacholia and offered her a deal:
    Gebru would remove herself from the paper if Google provided an
    account of who had reviewed the work and how, and established a
    more transparent review process for future research. If those
    conditions weren’t met, Gebru wrote, she would leave Google once
    she’d had time to make sure her team wouldn’t be too
    destabilized. The second email showed less corporate diplomacy.
    Addressed to a listserv for women who worked in Google Brain,
    the company’s most prominent AI lab and home to Gebru’s Ethical
    AI team, it accused the company of “silencing marginalized
    voices” and dismissed Google’s internal diversity programs as a
    waste of time.

    Relaxing in an Airbnb in Austin, Texas, the following night,
    Gebru received a message from one of her direct reports: “You
    resigned??” In her personal inbox she then found
    an email from Kacholia, rejecting Gebru’s offer and casting her
    out of Google. “We cannot agree as you are requesting,” Kacholia
    wrote. “The end of your employment should happen faster than
    your email reflects.” Parts of Gebru’s email to the listserv,
    she went on, had shown “behavior inconsistent with the
    expectations of a Google manager.” Gebru tweeted that she had
    been fired. Google maintained—and still does—that she resigned.

    Gebru’s tweet lit the fuse on a controversy that quickly
    inflamed Google. The company has been dogged in recent years by
    accusations from employees that it mistreats women and people of
    color, and from lawmakers that it wields unhealthy technological
    and economic power. Now Google had expelled a Black woman who
    was a prominent advocate for more diversity in tech, and who was
    seen as an important internal voice for greater restraint in
    the helter-skelter race to develop and deploy AI. One Google
    machine-learning researcher who had followed Gebru’s writing and
    work on diversity felt the news of her departure like a punch to
    the gut. “It was like, oh, maybe things aren’t going to change
    so easily,” says the employee, who asked to remain anonymous
    because they were not authorized to speak by Google management.

    Dean sent out a message urging Googlers to ignore Gebru’s call
    to disengage from corporate diversity exercises; Gebru’s paper
    had been subpar, he said, and she and her collaborators had not
    followed the proper approval process. In turn, Gebru claimed in
    tweets and interviews that she’d been felled by a toxic cocktail
    of racism, sexism, and censorship. Sympathy for Gebru’s account
    grew as the disputed paper circulated like samizdat among AI
    researchers, many of whom found it neither controversial nor
    particularly remarkable. Thousands of Googlers and outside AI
    experts signed a public letter castigating the company.

    But Google seemed to double down. Margaret Mitchell, the other
    coleader of the Ethical AI team and a prominent researcher in
    her own right, was among the hardest hit by Gebru’s ouster. The
    two had been a professional and emotional tag team, building up
    their group—which was one of several that worked on what Google
    called “responsible AI”—while parrying the sexist and racist
    tendencies they saw at large in the company’s culture. Confident
    that those same forces had played a role in Gebru’s downfall,
    Mitchell wrote an automated script to retrieve notes she’d kept
    in her corporate Gmail account that documented allegedly
    discriminatory incidents, according to sources inside Google. On
    January 20, Google said Mitchell had triggered an internal
    security system and had been suspended. On February 19, she was
    fired, with Google stating that it had found “multiple
    violations of our code of conduct, as well as of our security
    policies, which included exfiltration of confidential,
    business-sensitive documents.”

    Google had now fully decapitated its own Ethical AI research
    group. The long, spectacular fallout from that Thanksgiving
    ultimatum to Gebru left countless bystanders wondering: Had one
    paper really precipitated all of these events?

    The story of what actually happened in the lead-up to Gebru’s
    exit from Google reveals a more tortured and complex backdrop.
    It’s the tale of a gifted engineer who was swept up in the AI
    revolution before she became one of its biggest critics, a
    refugee who worked her way to the center of the tech industry
    and became determined to reform it. It’s also about a
    company—the world’s fifth largest—trying to regain its
    equilibrium after four years of scandals, controversies, and
    mutinies, but doing so in ways that unbalanced the ship even
    further.

    Beyond Google, the fate of Timnit Gebru lays bare something even
    larger: the tensions inherent in an industry’s efforts to
    research the downsides of its favorite technology. In
    traditional sectors such as chemicals or mining, researchers who
    study toxicity or pollution on the corporate dime are viewed
    skeptically by independent experts. But in the young realm of
    people studying the potential harms of AI, corporate researchers
    are central.

    Gebru’s career mirrored the rapid rise of AI fairness research,
    and also some of its paradoxes. Almost as soon as the field
    sprang up, it quickly attracted eager support from giants like
    Google, which sponsored conferences, handed out grants, and
    hired the domain’s most prominent experts. Now Gebru’s sudden
    ejection made her and others wonder if this research, in its
    domesticated form, had always been doomed to a short leash. To
    researchers, it sent a dangerous message: AI is largely
    unregulated and only getting more powerful and ubiquitous, and
    insiders who are forthright in studying its social harms do so
    at the risk of exile.

    IN APRIL 1998, two Stanford grad students named Larry Page and
    Sergey Brin presented an algorithm called PageRank at a
    conference in Australia. A month later, war broke out between
    Ethiopia and Eritrea, setting off a two-year border conflict
    that left tens of thousands dead. The first event set up
    Google’s dominance of the internet. The second set 15-year-old
    Timnit Gebru on a path toward working for the future megacorp.

    At the time, Gebru lived with her mother, an economist, in the
    Ethiopian capital of Addis Ababa. Her father, an electrical
    engineer with a PhD, had died when she was small. Gebru enjoyed
    school and hanging out in cafés when she and her friends could
    scrape together enough pocket money. But the war changed all
    that. Gebru’s family was Eritrean, and some of her relatives
    were being deported to Eritrea and conscripted to fight against
    the country they had made their home.

    Gebru’s mother had a visa for the United States, where Gebru’s
    older sisters, engineers like their father, had lived for years.
    But when Gebru applied for a visa, she was denied. So she went
    to Ireland instead, joining one of her sisters, who was there
    temporarily for work, while her mother went to America alone.

    Reaching Ireland may have saved Gebru’s life, but it also
    shattered it. She called her mother and begged to be sent back
    to Ethiopia. “I don’t care if it’s safe or not. I can’t live
    here,” she said. Her new school, the culture, even the weather
    were alienating. Addis Ababa’s rainy season is staccato, with
    heavy downpours interspersed with sunshine. In Ireland, rain fell
    steadily for a week. As she took on the teenage challenges of
    new classes and bullying, larger concerns pressed down. “Am I
    going to be reunited with my family? What happens if the
    paperwork doesn’t work out?” she recalls thinking. “I felt
    unwanted.”

    The next year, Gebru was approved to come to the US as a
    refugee. She reunited with her mother in Somerville,
    Massachusetts, a predominantly white suburb of Boston, where
    she enrolled in the local public high school—and a crash course
    in American racism.

    Some of her teachers, Gebru found, seemed unable or unwilling to
    accept that an African refugee might be a top student in math
    and science. Other white Americans saw fit to confide in her
    their belief that African immigrants worked harder than African
    Americans, whom they saw as lazy. History class told an
    uplifting story about the Civil Rights Movement resolving
    America’s racial divisions, but that tale rang hollow. “I
    thought that cannot be true, because I’m seeing it in the
    school,” Gebru says.

    Piano lessons helped provide a space where she could breathe.
    Gebru also coped by turning to math, physics, and her family.
    She enjoyed technical work, not just for its beauty but because
    it was a realm disconnected from personal politics or worries
    about the war back home. That compartmentalization became part
    of Gebru’s way of navigating the world. “What I had under my
    control was that I could go to class and focus on the work,” she
    says.

    Gebru’s focus paid off. In September 2001 she enrolled at
    Stanford. Naturally, she chose the family major, electrical
    engineering, and before long her trajectory began to embody the
    Silicon Valley archetype of the immigrant trailblazer. For a
    course during her junior year, Gebru built an experimental
    electronic piano key, helping her win an internship at Apple
    making audio circuitry for Mac computers and other products. The
    next year she went to work for the company full-time while
    continuing her studies at Stanford.

    At Apple, Gebru thrived. When Niel Warren, her manager, needed
    someone to dig into delta-sigma modulators, a class of analog-to-
    digital converters, Gebru volunteered, investigating whether the
    technology would work in the iPhone. “As an electrical engineer
    she was fearless,” Warren says. He found his new hardware
    hotshot to be well liked, always ready with a hug, and
    determined outside of work too. In 2008, Gebru withdrew from one
    of her classes because she was devoting so much time to
    canvassing for Barack Obama in Nevada and Colorado, where many
    doors were slammed in her face.
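
    As an aside for the technically curious: a delta-sigma
    modulator of the kind Warren's team was evaluating can be
    illustrated in a few lines of Python. The sketch below is a
    toy first-order loop under assumed parameters (a 1-bit
    quantizer and a sine input), not Apple's design. The
    integrator accumulates the difference between the input and
    the fed-back output bit, so the bitstream's local average
    tracks the input while quantization noise is pushed to high
    frequencies, where a simple averaging filter removes it.

        import numpy as np

        def delta_sigma_first_order(x):
            # Integrator state and previous 1-bit output (+1 or -1).
            v, y = 0.0, 0.0
            bits = np.empty_like(x)
            for n, sample in enumerate(x):
                v += sample - y              # input minus feedback
                y = 1.0 if v >= 0 else -1.0  # 1-bit quantizer
                bits[n] = y
            return bits

        # A slow sine well inside [-1, 1].
        t = np.arange(8192)
        x = 0.5 * np.sin(2 * np.pi * t / 1024)
        bits = delta_sigma_first_order(x)
        # Crude decimation filter: a 64-sample moving average.
        recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
        print(np.mean((recovered - x) ** 2))  # small residual error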

    As Gebru learned more about the guts of gadgets like the iPhone,
    she became more interested in the fundamental physics of their
    components—and soon her interests wandered even further, beyond
    the confines of electrical engineering. By 2011, she was
    embarking on a PhD at Stanford, drifting among classes and
    searching for a new direction. She found it in computer vision,
    the art of making software that can interpret images.

    Unbeknownst to her, Gebru now stood on the cusp of a revolution
    that would transform the tech industry in ways she would later
    criticize. One of Gebru’s favorite classes involved creating
    code that could detect human figures in photos. “I wasn’t
    thinking about surveillance,” Gebru says. “I just found it
    technically interesting.”

    In 2013 she joined the lab of Fei-Fei Li, a computer vision
    specialist who had helped spur the tech industry’s obsession
    with AI, and who would later work for a time at Google. Li had
    created a project called ImageNet that paid contractors small
    sums to tag a billion images scraped from the web with
    descriptions of their contents—cat, coffee cup, cello. The final
    database, some 15 million images, helped to reinvent machine
    learning, an AI technique that involves training software to get
    better at performing a task by feeding it examples of correct
    answers. Li’s work demonstrated that an approach known as deep
    learning, fueled by a large collection of training data and
    powerful computer chips, could produce much more accurate
    machine-vision technology than prior methods had yielded.
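
    That training recipe can be made concrete in a short sketch.
    The toy example below (a minimal sketch in PyTorch, using
    made-up 8-by-8 “images” and random labels purely for
    illustration; ImageNet-scale systems use millions of real
    photos and far deeper networks) shows the loop the article
    describes: the model makes guesses, the guesses are scored
    against the correct answers, and the model's weights are
    nudged to shrink the error.

        import torch
        import torch.nn as nn

        # Toy labeled data: 256 fake 8x8 grayscale images, 10 classes.
        images = torch.randn(256, 1, 8, 8)
        labels = torch.randint(0, 10, (256,))

        # A tiny convolutional network, a shallow cousin of the
        # deep models that ImageNet made practical.
        model = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * 8 * 8, 10),  # scores for the 10 classes
        )
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(5):
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)  # error vs. answers
            loss.backward()   # gradients of the error
            optimizer.step()  # adjust weights to reduce it
            print(epoch, float(loss))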

    Li wanted to use deep learning to give computers a more fine-
    grained understanding of the world. Two of her students had
    scraped 50 million images from Google Street View, planning to
    train a neural network to spot cars and identify their make and
    model. But they began wondering about other applications they
    might build on top of that capability. If you drew correlations
    between census data and the cars visible on a street, could that
    provide a way to estimate the demographic or economic
    characteristics of any neighborhood, just from pictures?

    Gebru spent the next few years showing that, to a certain level
    of accuracy, the answer was yes. She and her collaborators used
    online contractors and car experts recruited on Craigslist to
    identify the make and model of 70,000 cars in a sample of Street
    View images. The annotated pictures provided the training data
    needed for deep-learning algorithms to figure out how to
    identify cars in new images. Then they processed the full Street
    View collection and identified 22 million cars in photos from
    200 US cities. When Gebru correlated those observations with
    census and crime data, her results showed that more pickup
    trucks and VWs indicated more white residents, more Buicks and
    Oldsmobiles indicated more Black ones, and more vans
    corresponded to higher crime.
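
    The final step of that pipeline, correlating detections with
    census figures, is statistically routine. The sketch below
    uses pandas on invented per-city numbers (the column names
    and values are assumptions for illustration, not the study's
    data, which spanned 200 cities) to show the kind of
    correlation the team computed.

        import pandas as pd

        # Hypothetical per-city vehicle tallies and a census variable.
        df = pd.DataFrame({
            "pickups_per_1000_residents": [120, 310, 70, 220, 180],
            "buicks_per_1000_residents":  [40, 15, 90, 30, 55],
            "pct_white_residents":        [48.0, 71.0, 35.0, 63.0, 55.0],
        })

        # Pearson correlation of each tally with the census variable.
        print(df.corr()["pct_white_residents"])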

    This demonstration of AI’s power positioned Gebru for a
    lucrative career in Silicon Valley. Deep learning was all the
    rage, powering the industry’s latest products (smart speakers)
    and its future aspirations (self-driving cars). Companies were
    spending millions to acquire deep-learning technology and
    talent, and Google was placing some of the biggest bets of all.
    Its subsidiary DeepMind had recently celebrated the victory of
    its machine-learning bot over a human world champion at Go, a
    moment that many took to symbolize the future relationship
    between humans and technology.

    Gebru’s project fit in with what was becoming the industry’s new
    philosophy: Algorithms would soon automate away any problem, no
    matter how messy. But as Gebru got closer to graduation, the
    boundary she had established between her technical work and her
    personal values started to crumble in ways that complicated her
    feelings about the algorithmic future.

    “I’m not worried about machines taking over the world,” Gebru
    wrote. “I’m worried about groupthink, insularity, and arrogance
    in the AI community.”

    Her ass got fired by the racists at Google.

    https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)