• Risks Digest 33.95 (1/2)

    From RISKS List Owner@21:1/5 to All on Sat Dec 2 23:33:11 2023
    RISKS-LIST: Risks-Forum Digest Saturday 2 December 2023 Volume 33 : Issue 95

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/33.95>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    Commercial Flights Are Experiencing 'Unthinkable' GPS Attacks
    and Nobody Knows What to Do (Vice)
    G7 and EU countries pitch guidelines for AI cybersecurity
    (Joseph Bambridge)
    U.S. and UK Unveil AI Cyber-Guidelines (Politico via PGN)
    Was Argentina the First AI Election? (NYTimes)
    As AI-Controlled Killer Drones Become Reality, Nations Debate Limits,
    (The New York Times)
    Reports that Sports Illustrated used AI-generated stories and fake
    authors are disturbing, but not surprising (Poynter)
    Is Anything Still True? On the Internet, No One Know
    Anymore (WSJ)
    ChatGPT x 3 (sundry sources via Lauren Weinstein)
    Texas Rejects Science Textbooks Over Climate Change, Evolution Lessons
    (WSJ)
    A `silly' attack made ChatGPT reveal real phone numbers
    and email addresses (Engadget)
    Meta/Facebook profiting from sale of counterfeit U.S. stamps
    (Mich Kabay)
    Chaos in the Cradle of AI (The New Yorker)
    Impossibility of Strong watermarks for Generative AI
    Intel hardware vulnerability (Daniel Moghimi at Google_
    Hallucinating language models (Victor Miller)
    USB worm unleashed by Russian state hackers spreads worldwide
    (Ars Technica)
    AutoZone warns almost 185,000 customers of a data breach
    (Engadget)
    Okta admits hackers accessed data on all customers during recent breach
    (TechCrunch)
    USB worm unleashed by Russian state hackers spreads worldwide
    (Ars Technica)
    Microsoft’s Windows Hello fingerprint authentication has been bypassed
    (The Verge)
    Thousands of routers and cameras vulnerable to new 0-day attacks
    by hostile botnet (Ars Technica)
    A Postcard From Driverless San Francisco (Steve Bacher)
    Voting machine trouble in Pennsylvania county triggers alarm ahead of 2024
    (Politico via Steve Bacher)
    Outdated Password Practices are Widespread (Georgia Tech)
    THE CTIL FILES #1 (Shellenberger via geoff goodfellow)
    Judge rules it's fine for car makers to intercept your text messages
    (Henry Baker)
    Protecting Critical Infrastructure from Cyber Attacks (RMIT)
    Crypto Crashed and Everyone's In Jail. Investors Think It's
    Coming Back Anyway. (Vice)
    Feds seize Sinbad crypto mixer allegedly used by North Korean e
    hackers (TechCrunch)
    A lost bitcoin wallet passcode helped uncover a major security flaw
    (WashPost)
    Ontario's Crypto King still jet-setting to UK, Miami, and soon Australia
    despite bankruptcy (CBC)
    British Library confirms customer data was stolen by hackers,
    with outage expected to last months (TechCrunch)
    PSA: Update Chrome browser now to avoid an exploit
    already in the wild (The Verge)
    WeWork has failed. Like a lot of other tech startups, it left damage in its
    wake (CBC)
    Re: The AI Pin (Rob Slade)
    Re: Social media gets teens hooked while feeding aggression and
    impulsivity, and researchers think they know why (C.J.S. Hayward)
    Re: Garble in Schneier's AI post (Steve Singer)
    Re: Using your iPhone to start your car is about to get a
    lot easier (Sam Bull)
    Re: Oveview of the iLeakage Attack (Sam Bull)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Mon, 20 Nov 2023 19:00:14 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Commercial Flights Are Experiencing 'Unthinkable' GPS Attacks
    and Nobody Knows What to Do (Vice)

    New "spoofing" attacks resulting in total navigation failure have been occurring above the Middle East for months, which is "highly significant"
    for airline safety.

    https://www.vice.com/en/article/m7bk3v/commercial-flights-are-experiencing-unthinkable-gps-attacks-and-nobody-knows-what-to-do

    ------------------------------

    Date: Mon, 27 Nov 2023 9:10:36 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: G7 and EU countries pitch guidelines for AI cybersecurity
    (Joseph Bambridge)

    Joseph Bambridge, Politico Europe, 27 Nov 2023

    Cybersecurity authorities in 18 major European and Western countries,
    including all G7 states, today released joint guidelines on how to
    develop artificial intelligence systems in ways that ensure their cybersecurity.

    The United Kingdom, United States, Germany, France, Italy, Australia,
    Japan, Israel, Canada, Nigeria, Poland and others backed what they
    called the world's first AI cybersecurity guidelines. The initiative
    was led by the U.K.'s National Cyber Security Centre and follows
    London's AI Safety Summit that took place early November.

    The 20-page document sets out practical ways providers of AI systems can
    ensure they function as intended, don't reveal sensitive data and aren't
    taken offline by attacks.

    AI systems face both traditional threats and novel vulnerabilities
    like data poisoning and prompt injection attacks, the authorities
    said. The guidelines -- which are voluntary -- set standards for how technologists design, deploy and maintain AI systems with
    cybersecurity in mind.

    The U.K.'s NCSC will present the guidelines at an event Monday after
    noon.

    <https://y3r710.r.eu-west-1.awstrack.me/I0/0102018c10220f9c-cd93ae92-527e-4258-a9b4-5c43adb51332-000000/VBwAxQb3zMQOCAxex0irXa9NdgE=349>

    ------------------------------

    Date: Tue, 28 Nov 2023 11:26:30 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: U.S. and UK Unveil AI Cyber-Guidelines (Politico)

    (Joseph Bambridge, Politico, PGN-ed for RISKS)

    U.S. and UK UNVEIL AI CYBER GUIDELINES

    The UK's National Cyber Security Center and U.S. Cybersecurity and Infrastructure Security Agency on Monday unveiled what they say are the
    world's first AI cyber guidelines, backed by 18 countries including Japan, Israel, Canada and Germany. It's the latest move on the international stage
    to get ahead of the risks posed by AI as companies race to develop more advanced models, and as systems are increasingly integrated in government
    and society.

    ``Overall I would assess them as some of the early formal guidance
    related to the cybersecurity vulnerabilities that derive from both
    traditional and unique vulnerabilities,'' the Center for Strategic and guidelines appeared to be aimed at both traditional cyberthreats and
    new ones that come with the continued advancement of AI technologies.

    Although the guidelines are voluntary, Allen said they could be made
    mandatory for selling to the U.S. federal government for certain types
    of risk-averse activities. In the private sector, Allen said
    companies buying AI technologies could require vendors to demonstrate compliance with the guidelines through third-party certification or
    other means.

    Breaking it down: The guidelines aim to ensure security is a core
    requirement of the entire lifecycle of an AI system, and are focused
    on four themes: secure design, development, deployment and operation.
    Each section has a series of recommendations to mitigate security
    risks and safeguard consumer data, such as threat modeling, incident
    management processes and releasing AI models responsibly.

    Homeland Security Secretary Alejandro Mayorkas said in a statement
    that the guidelines are a ``historic agreement that developers must
    invest in, protecting customers at each step of a system's design and development.''International Studies' Gregory Allen told POLITICO. He said the

    The guidance is closely aligned with the U.S. National Institute of
    Standards and Technology's Secure Software Development Framework
    (which outlines steps for software developers to limit vulnerabilities
    in their products) and CISA's secure-by-design principles, which was
    also released in concert with a dozen other states.

    Acknowledgements: The document includes a thank you to a notable list
    of leading tech companies for their contributions, including Amazon,
    Anthropic, Google, IBM, Microsoft and OpenAI. Also in the mentions
    were Georgetown University's Center for Security and Emerging
    Technology, RAND and the Center for AI Safety and the program for
    Geopolitics, Technology and Governance, both at Stanford.

    Aaron Cooper, VP of global policy at tech trade group BSA | The
    Software Alliance, said in a statement to MT that the guidelines help
    `build a coordinated approach for cybersecurity and artificial
    intelligence,'' something that BSA has been calling for in many of its
    cyber and AI policy recs.

    ------------------------------

    Date: Mon, 20 Nov 2023 11:40:21 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Was Argentina the First AI Election? (NYTimes)

    Jack Nicas and Luc=C3=8Ca Cholakian Herrera
    *The New York Times*, 16 Nov 2023
    via ACM TechNews, November 20, 2023

    Sergio Massa and Javier Milei widely used artificial intelligence (AI) to create images and videos to promote themselves and attack each other prior
    to Sunday's presidential election in Argentina, won by Milei. AI made candidates say things they did not, put them in famous movies, and created campaign posters. Much of the content was clearly fake, but a few creations strayed into the territory of disinformation. Researchers have long worried about the impact of AI on elections, but those fears were largely
    speculative because the technology to produce deepfakes was too expensive
    and unsophisticated. "Now we've seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed," said Henry Ajder, an expert who has
    advised governments on AI-generated content.

    [The losing candidate was destroyed by speculative execution? PGN]

    And a few days later, this item:
    Argentina Elects Milei in Victory for the Far Right
    Jack Nicas, *The New York Times*, 20 Nov 2023, front page of the National
    Edition.
    PGN]

    ------------------------------

    Date: Wed, 22 Nov 2023 16:53:39 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: As AI-Controlled Killer Drones Become Reality, Nations Debate
    Limits (The New York Times)

    An experimental unmanned aircraft at Eglin Air Force Base in Florida. The drone uses artificial intelligence and has the capability to carry weapons, although it has not yet been used in combat.

    As AI-Controlled Killer Drones Become Reality, Nations Debate Limits

    Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant.

    https://www.nytimes.com/2023/11/21/us/politics/ai-drones-war-law.html

    ------------------------------

    Date: Tue, 28 Nov 2023 06:48:00 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Reports that Sports Illustrated used AI-generated stories and fake
    authors are disturbing, but not surprising (Poynter)

    It’s unsettling, especially from such a storied name. But comments from its parent company should have told us it was coming.

    In a story that has generated both shock and disdain, Futurism’s Maggie Harrison reports
    <https://futurism.com/sports-illustrated-ai-generated-writers> that Sports Illustrated published stories that were produced or partially produced by artificial intelligence, and that some stories had bylines its parent
    company should have told us it was coming.

    In a story that has generated both shock and disdain, Futurism’s Maggie Harrison reports
    <https://futurism.com/sports-illustrated-ai-generated-writers> that Sports Illustrated published stories that were produced or partially produced by artificial intelligence, and that some stories had bylines of fake
    authors. To be clear, the disdain was directed at Sports Illustrated.

    But maybe we shouldn't be surprised by any of this, as I’ll explain in a moment. First, the details.

    When asked about fake authors, an anonymous source described as a “person involved with the creation of the content” told Harrison, “There’s a lot. I
    was like, what are they? This is ridiculous. This person does not exist. At
    the bottom (of the page) there would be a photo of a person and some fake description of them like, ‘oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.’ Stuff like that. It’s just crazy.”

    The fake authors even included AI-generated mugshots. If true, that is
    pretty gross — photos of authors who don't actually exist, to go along with made-up bios that included made-up hobbies and even made-up pets. [...]

    https://www.poynter.org/commentary/2023/sports-illustrated-artificial-intelligence-writers-futurism/

    ------------------------------

    Date: Tue, 21 Nov 2023 10:07:10 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Is Anything Still True? On the Internet, No OneK
    Knows Anymore (WSJ)

    New tools can create fake videos and clone the voices of those closest to
    us. This is how authoritarianism arises.

    Creating and disseminating convincing propaganda used to require the
    resources of a state. Now all it takes is a smartphone.

    Generative artificial intelligence is now capable of creating fake
    pictures, clones of our voices <https://www.wsj.com/articles/i-cloned-myself-with-ai-she-fooled-my-bank-and-my-family-356bd1a3>,
    and even videos depicting and distorting world events. The result: From our personal <https://www.wsj.com/tech/fake-nudes-of-real-students-cause-an-uproar-at-a-new-vvxsxsjersey-high-school-df10f1bb>
    circles
    to the political <https://www.wsj.com/world/china/china-is-investing-billions-in-global-disinformation-campaign-u-s-says-88740b85>
    circuses,
    everyone must now question whether what they see and hear is true.

    We've long been warned <https://www.wsj.com/articles/the-world-isnt-as-bad-as-your-wired-brain-tells-you-1535713201>
    about the potential of social media to distort our view of the world <https://www.wsj.com/articles/why-social-media-is-so-good-at-polarizing-us-11603105204>,
    and now there is the potential for more false and misleading information to spread on social media than ever before. Just as importantly, exposure to AI-generated fakes can make us question the authenticity of everything we
    see <https://www.wsj.com/articles/the-deepfake-dangers-ahead-b08e4ecf>.
    Real images and real recordings can be dismissed as fake. ``When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ``I just don't trust anything anymore,'' says David Rand <https://mitsloan.mit.edu/faculty/directory/david-g-rand>, a professor at
    MIT Sloan who studies <https://www.nature.com/articles/s41562-023-01641-6>
    the creation, spread and impact of misinformation.

    This problem, which has grown more acute in the age of generative AI, is
    known as the liar's dividend <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954>, says Renee DiResta, a researcher at the Stanford Internet Observatory.

    The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe,
    adds DiResta, leading to what she calls =9Cbespoke realities <https://www.ribbonfarm.com/2019/12/17/mediating-consent/>.

    Examples of misleading content created by generative AI are not hard to come by, especially on social media. One widely circulated and fake image of Israelis lining the streets in support of their country has many of the hallmarks of being AI-generated <https://www.reuters.com/fact-check/photo-cheering-crowds-waving-israeli-flags-soldiers-is-ai-generated-2023-10-30/>
    including telltale oddities that are apparent if you look closely, such as distorted bodies and limbs. For the same reasons, a widely shared image that purports to show fans at a soccer match in Spain displaying a Palestinian
    flag doesn't stand up <https://factcheck.afp.com/doc.afp.com.33YY7NY> to scrutiny.

    ------------------------------

    Date: Wed, 22 Nov 2023 08:02:57 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: ChatGPT x 3 (sundry sources)

    ChatGPT Replicates Gender Bias in Recommendation Letters https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/

    OpenAI and Microsoft hit with copyright lawsuit from non-fiction authors https://www.engadget.com/openai-and-microsoft-hit-with-copyright-lawsuit-from-non-fiction-authors-101505740.html?src=rss

    ChatGPT generates fake data set to support scientific hypothesis https://www.nature.com/articles/d41586-023-03635-w

    ------------------------------

    Date: Sun, 19 Nov 2023 18:19:43 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Texas Rejects Science Textbooks Over Climate Change, Evolution
    Lessons (WSJ)

    https://www.wsj.com/us-news/education/texas-rejects-science-textbooks-over-climate-change-evolution-lessons-29a2c2ca

    [Do most Texans believe that climate change is a hoax, and evolution is
    impossible because it is inconsistent with the Bible? Or just the
    politicians? Note that dumbing down education will have to apply to
    chatbots as well, if they are used as textbooks. The next step has to be
    legislating that generative AI must not be consistent with established
    history regarding climate change, evolution, slavery, etc.? The only way
    out may be to ban chatbots with truthful training data. We seem to be on
    a very slippery slope with content censorship. PGN]

    ------------------------------

    Date: Thu, 30 Nov 2023 08:50:28 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: A `silly' attack made ChatGPT reveal real phone numbers
    and email addresses (Engadget)

    https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html

    ------------------------------

    Date: Sun, 26 Nov 2023 17:46:22 -0500
    From: <mekabay@gmail.com>
    Subject: Meta/Facebook profiting from sale of counterfeit
    U.S. stamps

    Meta/Facebook post and profit from ads on FB for criminals who sell
    counterfeit U.S. stamps to unsuspecting victims (or to those who choose to ignore warnings such as the one in the next paragraph). Images of the counterfeit stamps at the time of posting are here. I have reported these crimes to the United States Postal Inspection Service and the FBI's Internet Crime Complaint Center. I have also written to Meta about this criminal activity but never received a reply.

    See < http://www.mekabay.com/counterfeit-stamps/ > for images of over 500
    ads on FB for counterfeit US stamps.

    Warning I post online whenever I can:

    These are counterfeit stamps. It is a federal crime to use fake stamps as postage. Don't fall for these scams. https://www.uspis.gov/news/scam-article/counterfeit-stamps

    ------------------------------

    Date: Fri, 24 Nov 2023 19:51:50 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Chaos in the Cradle of AI (The New Yorker)

    The Sam Altman saga at OpenAI underscores an unsettling truth: nobody knows what *AI safety* really means.

    https://www.newyorker.com/science/annals-of-artificial-intelligence/chaos-in-the-cradle-of-ai

    [`AIIIII' sounds like a scream fo help in several languages, PGN]

    ------------------------------

    Date: Sun, 19 Nov 2023 15:39:33 +0000
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: Impossibility of Strong watermarks for Generative AI

    Watermarks have been proposed to allow identification of data (and
    pictures, etc) generated by AI. This paper shows that that goal is
    essentially impossible.

    https://arxiv.org/pdf/2311.04378.pdf

    ------------------------------

    Date: Mon, 27 Nov 2023 15:38:59 -0800
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: Hallucinating language models

    The introduction is really very clear.

    Adam Tauman Kalai, Microsoft Research
    Santosh S. Vempala, Georgia Tech
    Calibrated Language Models Must Hallucinate
    27 Nov 2023

    https://arxiv.org/pdf/2311.14648.pdf

    ------------------------------

    Date: Wed, 22 Nov 2023 21:00:41 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: USB worm unleashed by Russian state hackers spreads worldwide
    (Ars Technica)

    https://arstechnica.com/?p=1985993

    ------------------------------

    Date: Wed, 22 Nov 2023 18:38:06 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: AutoZone warns almost 185,000 customers of a data breach
    (Engadget)

    https://www.engadget.com/autozone-warns-almost-185000-customers-of-a-data-breach-202533437.html

    ------------------------------

    Date: Wed, 29 Nov 2023 20:47:49 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Okta admits hackers accessed data on all customers
    during recent breach (TechCrunch)

    https://techcrunch.com/2023/11/29/okta-admits-hackers-accessed-data-on-all-customers-during-recent-breach/

    [I've seen reports of breaches for several days, but this seems to be the
    first one from Okta. PGN]

    ------------------------------

    Date: Fri, 24 Nov 2023 15:37:03 +0000
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: USB worm unleashed by Russian state hackers spreads worldwide
    (Ars Technica)

    https://arstechnica.com/security/2023/11/normally-targeting-ukraine-russian-state-hackers-spread-usb-worm-worldwide/

    ------------------------------

    Date: Wed, 22 Nov 2023 18:23:24 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Microsoft’s Windows Hello fingerprint authentication has been bypassed
    (The Verge)

    https://www.theverge.com/2023/11/22/23972220/microsoft-windows-hello-fingerprint-authentication-bypass-security-vulnerability

    ------------------------------

    Date: Wed, 22 Nov 2023 20:58:06 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Thousands of routers and cameras vulnerable to new 0-day attacks
    by hostile botnet (Ars Technica)

    https://arstechnica.com/?p=1986211

    ------------------------------

    Date: Wed, 29 Nov 2023 08:53:27 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: A Postcard From Driverless San Francisco

    Unexplained stops. Incensed firefighters. Cars named Oregano. The a
    robotaxis are officially here. Riding with Cruise and Waymo during their
    debut in San Francisco.

    https://www.curbed.com/article/waymo-cruise-driverless-cars-robotaxi-san-francisco.html

    ------------------------------

    Date: Sat, 25 Nov 2023 08:10:12 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Voting machine trouble in Pennsylvania county triggers alarm ahead
    of 2024

    Officials say the issue did not affect the outcome of the votes, but are nonetheless racing to restore voter confidence ahead of next year’s
    election.

    https://www.politico.com/news/2023/11/25/voting-machine-trouble-pennsylvania-00128554

    Excerpt:

    Skeptics [...] say the root of the problem ties back to the basic design
    of the devices, called the ExpressVote XL.

    The machine spits out a paper print-out that records voters’ choices in two ways: a barcode that is used to tabulate their vote and corresponding text
    so they can verify it was input correctly.

    However, in the two races on 7 Nov, the machines swapped voters’ choices in the written section of the ballot -— but not the barcode — if they voted “yes” to retain one judge and “no” for the other.

    ES&S and Northampton officials acknowledged that pre-election software
    testing, which is conducted jointly, should have caught that problem. They
    say an ES&S employee first introduced the error during regular programming meant to prepare the machines for Election Day. [...]

    ------------------------------

    Date: Mon, 20 Nov 2023 19:14:06 -0800
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: Intel hardware vulnerability (Daniel Moghimi at Google_

    We found another vulnerability inside Intel Corporation CPUs. Somehow instruction prefixes that should be ignored mess up the "fast rep string
    mov" FRSM extension and causes invalid instruction execution. This vulnerability with high severity rating has serious consequence for cloud providers. It enables an attacker who is renting a cloud VM to: - DDOS an entire server - Elevates privilege gaining access to the entire server (Confirmed by Intel) https://lnkd.in/guzjT3UD https://lnkd.in/gUn-vAvN

    ------------------------------

    Date: Wed, 22 Nov 2023 10:48:11 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Outdated Password Practices are Widespread
    (Georgia Tech)

    Georgia Tech Research, 17 Nov 23). via ACM TechNews

    A majority of the world's most popular websites are putting users and their data at risk by failing to meet minimum password requirement standards, according to researchers at the Georgia Institute of Technology (Georgia
    Tech). The researchers analyzed 20,000 randomly sampled websites from the Google Chrome User Experience Report, a database of 1 million websites and pages. Using a novel automated tool that can assess a website's password creation policies, they found that many sites permit very short passwords,
    do not block common passwords, and use outdated requirements like complex characters. Georgia Tech's Frank Li said security researchers have
    "identified and developed various solutions and best practices for improving Internet and Web security. It's crucial that we investigate whether those solutions or guidelines are actually adopted in practice to understand
    whether security is improving in reality."

    ------------------------------

    Date: Tue, 28 Nov 2023 19:44:03 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: THE CTIL FILES #1

    Many people insist that governments aren't involved in censorship, but they are. And now, a whistleblower has come forward with an explosive new trove
    of documents, rivaling or exceeding the Twitter Files and Facebook Files in scale and importance.

    [image: image.png]

    US military contractor Pablo Breuer (left), UK defense researcher Sara-Jayne CSJ Terp (center), and Chris Krebs, former director of the U.S. Department
    of Homeland Security's Cybersecurity and Infrastructure Security Agency (DHS-CISA) A whistleblower has come forward with an explosive new trove of documents, rivaling or exceeding the Twitter Files and Facebook Files in
    scale and importance. They describe the activities of an anti-disinformation group called the Cyber Threat Intelligence League, or CTIL, that officially began as the volunteer project of data scientists and defense and
    intelligence veterans but whose tactics over time appear to have been
    absorbed into multiple official projects, including those of the Department
    of Homeland Security (DHS). The CTI League documents offer the missing link answers to key questions not addressed in the Twitter Files and Facebook
    Files. Combined, they offer a comprehensive picture of the birth of the anti-disinformation sector, or what we have called the Censorship Industrial Complex. The whistleblower's documents describe everything from the genesis
    of modern digital censorship programs to the role of the military and intelligence agencies, partnerships with civil society organizations and commercial media, and the use of sock puppet accounts and other offensive techniques. ``Lock your shit down," explains one document about creating
    *your spy* disguise.'' Another explains that while such activities overseas
    are "typically" done by "the CIA and NSA and the Department of Defense," censorship efforts "against Americans" have to be done using private
    partners because the government doesn't have the "legal authority." The whistleblower alleges that a leader of CTI League, a former British intelligence analyst, was *in the room* at the Obama White House in 2017
    when she received the instructions to create a counter-disinformation
    project to stop a "repeat of 2016." Over the last year, Public, Racket, congressional investigators, and others have documented the rise of the Censorship Industrial Complex, a network of over 100 government agencies and nongovernmental organizations that work together to urge censorship by
    social media platforms and spread propaganda about disfavored individuals, topics, and whole narratives. The US Department of Homeland Security's Cybersecurity and Information Security Agency (CISA) has been the center of gravity for much of the censorship, with the National Science Foundation financing the development of censorship and disinformation tools and other federal government agencies playing a supportive role. Emails from CISA's
    NGO and social media partners show that CISA created the Election Integrity Partnership (EIP) in 2020, which involved the Stanford Internet Observatory (SIO) and other US government contractors. EIP and its successor, the
    Virality Project (VP), urged Twitter, Facebook and other platforms to censor social media posts by ordinary citizens and elected officials alike. [...]

    https://twitter.com/shellenberger/status/1729538920487305723

    ------------------------------

    Date: Sun, 19 Nov 2023 17:34:07 +0000
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: Judge rules it's fine for car makers to intercept
    your text messages

    I was worried about this problem the last time I rented a car, because
    I was able to see all the GPS destinations and the phone numbers
    of some of the previous rental customers when I first got into the
    rental car. I didn't want to leave my data available to every subsequent renter.

    But *clearing the GPS, message and phone number data logs* took
    me (a PhD in Computer Science) at least 15 minutes and a significant
    amount of research in order to perform this expunging task on a
    relatively high-end rental car.

    Very few people are going to spend the time while turning in their
    rental car to clear these personal data from the car data logs --
    especially when they're trying like crazy to get to their airplane
    on time!

    There needs to be a *industry-wide standard* for clearing these
    data which takes only a second or two.<<

    Furthermore, the car manufacturers should be liable if these supposedly expunged data are subsequently used illegally -- e.g., for tracking down
    an ex-spouse or for identity theft.

    https://www.malwarebytes.com/blog/news/2023/11/judge-rules-its-fine-for-car-make
    rs-to-intercept-your-text-messages

    Judge rules it's fine for car makers to intercept your text messages

    Posted: November 9, 2023 by Pieter Arntz

    A federal judge has refused to bring back a class action lawsuit that
    alleged four car manufacturers had violated Washington state's privacy
    laws by using vehicles' on-board infotainment systems to record
    customers' text messages and mobile phone call logs.

    The judge ruled that the practice doesn't meet the threshold for an
    illegal privacy violation under state law. The plaintiffs had appealed
    a prior judge's dismissal.

    https://www.documentcloud.org/documents/24133084-22-35448

    Car manufacturers Honda, Toyota, Volkswagen, and General Motors were
    facing five related privacy class action suits. One of those cases,
    against Ford, had been dismissed on appeal previously.

    Infotainment systems in the company's vehicles began downloading and
    storing a copy of all text messages on smartphones when they were
    connected to the system. Once messages have been downloaded, the
    software makes it impossible for vehicle owners to access their
    communications and call logs but does provide law enforcement with
    access, the lawsuit said.

    The Seattle-based appellate judge ruled that the interception and
    recording of mobile phone activity did not meet the Washington Privacy
    Act's (WPA) standard that a plaintiff must prove that "his or her
    business, his or her person, or his or her reputation" has been
    threatened.

    In a recent Lock and Code podcast, we heard from Mozilla researchers
    that the data points that car companies say they can collect on you
    include social security number, information about your religion, your
    marital status, genetic information, disability status, immigration
    status, and race. And they can sell that data to marketers.

    https://www.malwarebytes.com/blog/podcast/2023/09/what-does-a-car-need-to-know-about-your-sex-life

    This is alarming. Given the increasing number of sensors being placed

    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)