• Risks Digest 33.59 (2/2)

    From RISKS List Owner@21:1/5 to All on Tue Jan 3 00:24:59 2023
    [continued from previous message]

    them out nicely. Then the music stopped.

    ------------------------------

    Date: Mon, 12 Dec 2022 20:50:16 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Twitter dissolves Trust and Safety Council, Yoel Roth flees home
    (WashPost)

    Meanwhile, a former top Twitter official fled his home amid attacks
    following Musk tweets. https://www.washingtonpost.com/technology/2022/12/12/musk-twitter-harass-yoel-roth

    ------------------------------

    Date: Mon, 2 Jan 2023 13:29:44 -0500
    From: Jan Wolitzky <jan.wolitzky@gmail.com>
    Subject: Cats disrupt satellite Internet service (Smithsonian Mag)

    Okay, enough with the stories of rats chewing through data cables and
    squirrels self-immolating to cause power blackouts. Here's a story of
    cats disrupting satellite Internet service, because they discovered that
    Elon Musk's Starlink dishes are heated (to prevent snow build-up from
    disrupting satellite Internet service [!!!]). Cute cat pix included.

    https://www.smithsonianmag.com/smart-news/outdoor-cats-are-using-500-starlink-satellite-dishes-as-self-heating-beds-180979401/

    ------------------------------

    Date: Mon, 19 Dec 2022 14:53:52 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: How Bots Pushing Adult Content Drowned Out Chinese Protest Tweets
    (NYTimes)

    How Bots Pushing Adult Content Drowned Out Chinese Protest Tweets
    https://www.nytimes.com/interactive/2022/12/19/technology/twitter-bots-china-protests-elon-musk.html

    ------------------------------

    Date: Thu, 22 Dec 2022 14:44:22 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Okta had another security incident, this time involving stolen
    source code (Engadget)

    Okta had another security incident, this time involving stolen source code
    https://www.engadget.com/okta-stolen-source-code-205601214.html

    ALSO:
    Okta says source code for Workforce Identity Cloud service was copied
    (Ars Technica)

    https://arstechnica.com/information-technology/2022/12/okta-says-source-code-for-workforce-identity-cloud-service-was-copied/

    ------------------------------

    Date: Sat, 24 Dec 2022 08:43:29 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: There is great danger in training an AI to lie...

    https://twitter.com/AlexEpstein/status/1606347326624215040

    ------------------------------

    Date: Fri, 30 Dec 2022 12:09:31 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Code-Generating AI Can Introduce Security Vulnerabilities
    (Kyle Wiggers)

    Kyle Wiggers, TechCrunch, 28 Dec 2022, via ACM TechNews, 30 Dec 2022

    Software engineers who use code-generating artificial intelligence (AI)
    systems are more likely to introduce security vulnerabilities in the
    apps they develop, according to researchers affiliated with Stanford
    University. Their study looked at Codex, an AI code-generating system
    developed by the research lab OpenAI. The researchers recruited
    developers to use Codex to complete security-related problems across
    programming languages, including Python, JavaScript, and C. Participants
    who had access to Codex were more likely to write incorrect and
    *insecure* solutions to programming problems than a control group, and
    they were more likely to believe that their insecure answers were secure
    than were the people in the control group.
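    As a hypothetical illustration (not drawn from the study itself): the
    security-related tasks in question typically look like the sketch below,
    which contrasts an injection-prone query pattern that code assistants
    often emit with the parameterized alternative. The table and function
    names are invented for this example.

```python
import sqlite3

def find_user_insecure(conn, name):
    # Vulnerable: untrusted input is spliced into the SQL string, so an
    # input such as "x' OR '1'='1" matches every row (SQL injection).
    query = f"SELECT id FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, name):
    # Safe: a parameterized query treats the input purely as data.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
print(len(find_user_insecure(conn, malicious)))  # 2 -- leaks every row
print(len(find_user_secure(conn, malicious)))    # 0 -- no such user
```

    Both versions behave identically on benign input, which is exactly why
    the insecure one can look correct to a developer reviewing an
    assistant's suggestion.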

    ------------------------------

    Date: Tue, 27 Dec 2022 09:35:15 -0700
    From: Rik Farrow <rik@rikfarrow.com>
    Subject: Co-Pilot helps write insecure code

    An article in *The Register* (including the word 'boffins') describes
    two papers showing that programmers using Co-Pilot think they write more
    secure code but are actually doing the opposite:

    https://www.theregister.com/2022/12/21/ai_assistants_bad_code/

    Does this suggest that if Skynet becomes a reality, it can be hacked? More
    likely, that the training code used for Co-Pilot started out as insecure
    and buggy.

    ------------------------------

    Date: Thu, 29 Dec 2022 02:18:52 +0000
    From: Richard Marlon Stein <rmstein@protonmail.com>
    Subject: ChatGPT Explains Why AIs like ChatGPT Should Be Regulated
    (Scientific American)

    https://www.scientificamerican.com/article/chatgpt-explains-why-ais-like-chatgpt-should-be-regulated/

    I'm surprised ChatGPT -- AI generally -- didn't suggest self-regulation.
    The AI-authoring industry appears to favor that approach versus
    explainability via Hagras' criteria
    (https://www.researchgate.net/publication/328088140_Toward_Human-Understandable_Explainable_AI)
    or the equivalent.

    ------------------------------

    Date: Sun, 25 Dec 2022 18:38:42 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: New bot ChatGPT will force colleges to get creative to prevent
    cheating, experts say (NBC News)

    New bot ChatGPT will force colleges to get creative to prevent cheating, experts say

    Those who work with AI in their classrooms said they're not panicking
    about ChatGPT, which went viral after its launch last week.

    https://www.nbcnews.com/tech/chatgpt-can-generate-essay-generate-rcna60362

    ------------------------------

    Date: Sun, 11 Dec 2022 11:45:24 -0500
    From: Gene Spafford <spaf@purdue.edu>
    Subject: Re: Dreams of a Future in Big Tech Dim for Computer Science
    Students (RISKS-33.57)

    I have no idea how many computer science curricula include relevant
    courses today.

    ABET certification requires coverage of ethics. The ACM/IEEE curricular
    recommendations include ethics. So, common curricula generally include
    the topic.

    Of course, that doesn't mean that it is covered in any meaningful way. I
    know some institutions give it only a passing mention. At others, it is
    likely a topic at the end of some courses that is viewed as expendable
    when there is more to cover from the syllabus than there is class time
    in the semester. Thankfully, this is not the case everywhere.

    I haven't found meaningful coverage in many textbooks, which means it is
    easy to overlook. For faculty who are uncomfortable with the topic, or
    who have no experience in presenting it, this often means the topic is
    given superficial (if any) coverage in classes.

    In a sense, professional ethics is a CS topic similar to writing safe
    code: it is in the syllabi at most schools, but given only a vague hand
    wave at too many, because the potential employers of students are more
    interested in a few more weeks of instruction in some fad topic. In the
    view of faculty, students are more likely to get employed if they know
    how to build a blockchain or ML system than if they spend time learning
    how to employ them in an ethical manner, and recent news continues to
    illustrate the problems with that approach.

    To relate a particular positive example: I have included a section on
    professional ethics in every course I have taught at Purdue since I got
    here 35 years ago. I have created both an undergrad and a grad course
    that include multi-week discussions of ethics (and bias, logical
    fallacies, and misinformation, among other topics) that seem to be
    well-received by students, although both are electives. A decade ago,
    the department adopted an ethics requirement for grad students. This
    involves an introductory lecture that I give and a requirement to
    complete the CITI course on responsible conduct of research.

    I'm told by people at companies and government agencies (and by alumni)
    that they wish other schools devoted time and resources to the topic the
    way we do. Meanwhile, I know we could do more at the undergrad level.

    (I'm writing this as someone who has participated in the development of
    the last two iterations of the ACM Code of Professional Ethics, as an
    attendee of Terry Bynum's '81 conference[*], and as leader of ACM's
    committee on publication ethics. So I cannot claim to be a *typical*
    faculty member in this regard, or that the Purdue experience is
    generalizable.)

    The science-fiction stories of rogue AI, concerns about autonomous
    weapons systems, issues of cryptocurrency fraud, and the other topics we
    have seen for decades in RISKS (thanks, Peter) are not solely traceable
    to technical faults -- or even primarily traceable to the technology.
    They stem from choices and decisions by people who, too often, are
    thinking about whether they *can* do something rather than whether it is
    proper to do it, or evaluating the consequences.

    We can definitely do better.

    [Thanks, Spaf. Having known you for so long, this is very helpful.
    Please note:
    Ethics, Liability, and Responsibility (Gene Spafford),
    RISKS-5.60 18 Nov 87
    * Also, two of Terry Bynum's meetings that we both attended were
    NSF Ethics Panel, 1 Nov 1989 at SRI WashDC, and
    The National Conference on Computing and Values, 12-16 Aug 1991 New Haven.
    PGN]

    ------------------------------

    Date: Mon, 19 Dec 2022 06:18:51 +0000
    From: Parnas, David <parnas@mcmaster.ca>
    Subject: Re: Pretty Smart AI (RISKS-33.58)

    A more interesting question is, ``What would Joe Weizenbaum think about
    ChatGPT?'' I think he would be turning over in his grave, seeing his
    lessons about Eliza forgotten.

    An even more interesting question is, ``Would anyone trust that technology
    if the results mattered?'' Who?

    ------------------------------

    Date: Tue, 20 Dec 2022 13:06:23 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Re: Pretty Smart AI (RISKS-33.58)

    Q: What is the difference between lento and adagio?
    A: Lento is a tempo marking that indicates a slow and leisurely pace, while
       adagio is a tempo marking that indicates a slower and more solemn pace.
    C: Correct.
    G: Lento -- slowly (40--45 BPM)
       Largo -- broadly (45--50 BPM)
       Adagio -- slow and stately (literally, at ease) (55--65 BPM)

    Those answers appear inconsistent with one another. Google demonstrates
    that adagio is faster than either lento or largo, but GPT-3's response
    seems to claim that adagio is slower than lento. Maybe GPT-3 is going by
    the principle that "slow" is slower than "slower," but that's not how
    one reads it when the statements are adjacent to one another.

    ------------------------------

    Date: Mon, 1 Aug 2020 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) is online.
    <http://www.CSL.sri.com/risksinfo.html>
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-33.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 33.59
    ************************
