RISKS-LIST: Risks-Forum Digest Saturday 15 February 2020 Volume 31 : Issue 58
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/31.58>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents:
The Intelligence Coup of the Century: For decades, the CIA read the
encrypted communications of allies and adversaries (Greg Miller)
The US Fears Huawei Because It Knows How Tempting Backdoors Are (WIRED)
U.S. Charges Chinese Military Officers in 2017 Equifax Hacking (NYTimes)
Voatz: Ballots, Blockchains, and Boo-boos? (MIT via PGN retitling)
Lax FAA oversight allowed Southwest to put millions of
passengers at risk, IG says (WashPost)
Pentagon ordered to halt work on Microsoft's JEDI cloud contract after
Amazon protests (WashPost)
Linux is ready for the end of time (ZDNet)
Google redraws the borders on maps depending on who's looking (WashPost)
Car renter paired car to FordPass, could still control car long after return
(ZDNet)
European Parliament urges oversight for AI (Politico Europe)
AI can create new problems as it solves old ones (Fortune)
AI and Ethics (NJ Tech Weekly)
The future of software testing in 2020: Here's what's coming (Functionize)
Will Past Criminals Reoffend? Humans Are Terrible at Guessing, and Computers
Aren't Much Better (Scientific American)
Apple joins FIDO Alliance, commits to getting rid of passwords (ZDNet)
IRS paper forms vs. COVID-19 (Dan Jacobson)
The Politics of Epistemic Fragmentation (Medium)
Why Is Social Media So Addictive? (Mark D. Griffiths)
The high cost of a free coding bootcamp (The Verge)
Debunking the lone woodpecker theory (Ed Ravin)
Re: Benjamin Netanyahu's election app potentially exposed data for
every Israeli voter (Amos Shapir)
Re: Backhoes, squirrels, and woodpeckers as DoS vectors (Tom Russ)
Re: A lazy fix 20 years ago means the Y2K bug is taking down computers, now
(Martin Ward)
Re: Autonomous vehicles (Stephen Mason)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Tue, 11 Feb 2020 08:53:12 PST
From: "Peter G. Neumann" <
neumann@csl.sri.com>
Subject: The Intelligence Coup of the Century: For decades, the CIA read the
encrypted communications of allies and adversaries (Greg Miller)
Greg Miller, *The Washington Post*, 11 Feb 2020
<https://www.washingtonpost.com/graphics/2020/world/national-security/cia-crypto-encryption-machines-espionage/>
For more than half a century, governments all over the world trusted a
single company to keep the communications of their spies, soldiers and diplomats secret. That company was secretly run by the CIA, which had the ability to read all those communications for decades.
The company, Crypto AG, got its first break with a contract to build code-making machines for U.S. troops during World War II. Flush with cash,
it became a dominant maker of encryption devices for decades, navigating
waves of technology from mechanical gears to electronic circuits and,
finally, silicon chips and software.
The Swiss firm made millions of dollars selling equipment to more than 120 countries well into the 21st century. Its clients included Iran, military juntas in Latin America, nuclear rivals India and Pakistan, and even the Vatican.
But what none of its customers ever knew was that Crypto AG was secretly
owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company's devices so they could easily break the codes that countries used to send encrypted messages.
The decades-long arrangement, among the most closely guarded secrets of the Cold War, is laid bare in a classified, comprehensive CIA history of the operation obtained by The Washington Post and ZDF, a German public
broadcaster, in a joint reporting project.
The account identifies the CIA officers who ran the program and the
company executives entrusted to execute it. It traces the origin of the venture as well as the internal conflicts that nearly derailed it. It describes ho the U.S. and its allies exploited other nations' gullibility
for years, taking their money and stealing their secrets.
The operation, known first as `Thesaurus' and later `Rubicon', ranks among
the most audacious in CIA history.
[Very long, but remarkably illuminating item abridged for RISKS. PGN]
------------------------------
Date: Thu, 13 Feb 2020 19:04:06 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: The US Fears Huawei Because It Knows How Tempting Backdoors Are
(WIRED)
https://www.wired.com/story/huawei-backdoors-us-crypto-ag/
[See also
https://www.businessinsider.com/us-accuses-huawei-of-spying-through-law-enforcement-backdoors-2020-2
PGN]
------------------------------
Date: Mon, 10 Feb 2020 14:17:46 -0500
From: Monty Solomon <monty@roscom.com>
Subject: U.S. Charges Chinese Military Officers in 2017 Equifax Hacking
(NYTimes)
https://www.nytimes.com/2020/02/10/us/politics/equifax-hack-china.html
https://www.washingtonpost.com/national-security/justice-dept-charges-four-members-of-chinese-military-in-connection-with-2017-hack-at-equifax/2020/02/10/07a1f7be-4c13-11ea-bf44-f5043eb3918a_story.html
[Let's not forget the massive loss of personal data from the attack on the
Office of Personnel Management, which might be even more damaging.
Reported (for example) in RISKS-28.69,70,71,72,75,80,83,94,95,96 in 2015.
PGN]
------------------------------
Date: Thu, 13 Feb 2020 17:01:05 PST
From: "Peter G. Neumann" <
neumann@csl.sri.com>
Subject: Voatz: Ballots, Blockchains, and Boo-boos? (MIT via PGN retitling)
This is an outstanding paper.
Michael A. Specter, James Koppel, Daniel Weitzner (MIT)
The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz,
the First Internet Voting Application Used in U.S. Federal Elections
https://internetpolicy.mit.edu/wp-content/uploads/2020/02/SecurityAnalysisOfVoatz_Public.pdf
See also some of the subsequent items:
"Their security analysis of the application, called Voatz, pinpoints a
number of weaknesses, including the opportunity for hackers to alter, stop,
or expose how an individual user has voted."
http://news.mit.edu/2020/voting-voatz-app-hack-issues-0213
Voting on Your Phone: New Elections App Ignites Security Debate,
*The New York Times*, 13 Feb 2020
https://www.nytimes.com/2020/02/13/us/politics/voting-smartphone-app.html
Kim Zetter
https://www.vice.com/en_us/article/akw7mp/sloppy-mobile-voting-app-used-in-four-states-has-elementary-security-flaws
The general consensus seems to be that Voatz's responses neither address the criticisms nor give any reasonable assurance.
https://blog.voatz.com/?p=1209 https://www.prnewswire.com/news-releases/new-york-times-profiles-voatz-301004581.html
------------------------------
Date: Tue, 11 Feb 2020 19:33:16 -0800
From: Richard Stein <rmstein@ieee.org>
Subject: Lax FAA oversight allowed Southwest to put millions of
passengers at risk, IG says (WashPost)
https://www.washingtonpost.com/local/trafficandcommuting/lax-faa-oversight-allowed-southwest-to-put-millions-of-passengers-at-risk-ig-says/2020/02/11/a3fdb714-4d22-11ea-b721-9f4cdc90bc1c_story.html
[That's "lax", not "LAX". PGN]
"The Federal Aviation Administration allowed Southwest Airlines to put
millions of passengers at risk by letting the airline operate planes that
did not meet U.S. aviation standards and by failing to provide its own inspectors with the training needed to ensure the highest degree of safety, according to a report released Tuesday by the Department of Transportation's inspector general."
The flying public experiences elevated risk when FAA inspectors are unqualified or under-trained to competently fulfill mandated
assignments. Trust-but-verify rigor is required to ensure life-critical operational readiness. Coffee-cup inspections don't cut it.
"The FAA's overreliance on industry-provided risk assessments and failure to dig deeply into many of those assessments is a broader concern raised by several outside experts and reviews following the crashes of two Boeing 737
Max jets that killed 346 people..."
See http://catless.ncl.ac.uk/Risks/31/17#subj2.1 for an exposé of industry
self-regulation efforts, and why the US government promotes the
practice. Alternatively, the EU's precautionary regulatory approach might
reduce the frequency of disruptive brand-outrage incidents and declining
product orders.
------------------------------
Date: Fri, 14 Feb 2020 10:17:27 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Pentagon ordered to halt work on Microsoft's JEDI cloud contract
after Amazon protests (WashPost)
A lawsuit brought by Amazon has forced the Pentagon to again pump the brakes
on an advanced cloud computing system it sought for years, prompting yet another delay the military says will hurt U.S. troops and hinder its
national security mission.
A federal judge Thursday ordered the Pentagon to halt work on the Joint Enterprise Defense Infrastructure cloud computing network, known as JEDI, as the court considers allegations that President Trump improperly interfered
in the bidding process.
The order comes just one day before the Defense Department had planned to
``go live'' with what it has long argued is a crucial national defense priority.
https://www.washingtonpost.com/business/2020/02/13/court-orders-pentagon-halt-work-microsofts-jedi-cloud-contract-after-amazon-protests/
Halt work? ...one day before? ...a crucial national defense priority? Politicize technology decisions? Sounds about right.
------------------------------
Date: Fri, 14 Feb 2020 10:21:08 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Linux is ready for the end of time (ZDNet)
2038 is for Linux what Y2K was for mainframe and PC computing in 2000, but
the fixes are underway to make sure all goes well when that fatal time rolls around. ...
But look at it this way: After we fix this, we won't have to worry about 64-bit Linux running out of seconds until 15:30:08 GMT Sunday, December 4, 292,277,026,596. Personally, I'm not going to worry about that one.
https://www.zdnet.com/article/linux-is-ready-for-the-end-of-time/
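  [The two dates involved are easy to check. A quick sketch (Python here,
  though the kernel work itself is of course in C): a signed 32-bit time_t
  overflows at 2**31 - 1 seconds after the Unix epoch, and one second later
  the counter wraps to a date in December 1901.

```python
from datetime import datetime, timezone

# A signed 32-bit time_t overflows at 2**31 - 1 seconds after the
# Unix epoch (1 Jan 1970 UTC) -- the "Y2038" moment.
Y2038_MAX = 2**31 - 1
rollover = datetime.fromtimestamp(Y2038_MAX, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00

# One second later, a 32-bit counter wraps to a large negative number,
# which naive code interprets as a date 68 years *before* the epoch.
wrapped = Y2038_MAX + 1 - 2**32   # what the 32-bit register actually holds
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```

  The 64-bit figure quoted above is the same arithmetic with 2**63 - 1. PGN-ed]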
------------------------------
Date: Fri, 14 Feb 2020 12:21:04 -0800
From: Richard Stein <rmstein@ieee.org>
Subject: Google redraws the borders on maps depending on who's looking
(WashPost)
Dynamic map border revisions: a catastrophic recipe for navigation errors
and munitions deployment.
------------------------------
Date: Fri, 14 Feb 2020 17:53:13 -0500
From: Mary M Shaw <mary.shaw@cs.cmu.edu>
Subject: Car renter paired car to FordPass, could still control car long after
return (ZDNet)
Someone rented a Ford from Enterprise and paired it with FordPass to get
remote control. Five months later he could still start and stop the engine, lock and unlock the car, and track it -- remotely. Same thing happened to him a second time.
Recent piece in ZDNet
https://www.zdnet.com/article/he-returned-the-rental-car-long-ago-he-can-still-turn-the-engine-on-via-an-app/
Earlier report in Ars Technica
Text of ZDNet article ...
*He returned the rental car long ago. He can still turn the engine on via an app*
Imagine you've parked your rental car and are walking away. Suddenly, the
car starts up, seemingly on its own. Yes, it's another day in technology
making everything better. ...
You think we're living in the end of times?
No, this is just a transitional period between relative sanity and robot inanity.
The problem, of course, is that our deep, mindless reliance on technology is causing severe disruption.
I'm moved to this fortune cookie thought by the tale of a man who rented a
Ford Expedition from Enterprise. He gave it back and, five months later, he discovered that he could still start its engine, switch it off, lock and
unlock it and even track it. Remotely, that is.
You see, as Ars Technica described last October
<https://arstechnica.com/information-technology/2019/10/five-months-after-returning-rental-car-man-still-has-remote-control/>,
Masamba Sinclair had connected his rental car to FordPass, an app that's presumably very useful. Who wouldn't want to remotely unlock the doors of a
car someone else is renting? Just to imagine their faces, you understand. It
so happened that Sinclair hadn't unpaired his app from the car. Cue the absurdity.
At the time, I thought Sinclair's tale entertaining. But surely the app's vulnerability would be patched, secured or whatever technical verbal emoji
you might choose.
Yet Sinclair just rented another Ford -- this time, a Mustang. And what do
you know, four days after he'd returned it, he could still make the car do things from his phone. Which could have been a touch bemusing to anyone who happened to have subsequently rented it.
<https://arstechnica.com/information-technology/2020/02/rental-car-agency-continues-to-give-remote-control-long-after-cars-are-returned/>
It seems that Ford does offer warning notifications inside the car when it's paired with someone's phone.
Yet if subsequent renters or, indeed, the rental company's cleaners don't
react to such notifications -- or simply don't see them -- a random somebody who happens to still have an app paired to the car may incite some remote action, like a ghostly jump start.
You might think Sinclair should have already disconnected his app from any
car he'd previously rented. Some might grunt, though, that it shouldn't be
his responsibility.
For its part, Enterprise gave Ars a statement that began: "The safety and privacy of our customers is an important priority for us as a company." An important priority, but not the most important priority?
The company added: "Following the outreach last fall, we updated our car cleaning guidelines related to our master reset procedure. Additionally, we instituted a frequent secondary audit process in coordination with Ford. We also started working with Ford and are very near the completion of testing software with them that will automate the prevention of FordPass pairing by rental customers."
Here's the part that always makes me curl up on my sofa and offer
intermittent bleats. Why is it that when technologies such as these are implemented, the creators don't sufficiently consider the potential consequences and prevent them from happening?
If Sinclair could so easily keep his app paired to any Ford he'd rented --
and this surely doesn't just apply to Fords -- why wasn't it easy for the
Ford and/or Enterprise to ensure it couldn't happen?
Why does it take a customer to point out the patent insecurity of the system before companies actually do anything about it?
Perhaps one should be grateful that at least nothing grave occurred. But imagine if someone of brittle brains realized they could be the ghost in a machine and really scare a stranger.
Too often, tech companies place the onus on customers to work things out for themselves and even to save themselves. Or, worse, to only discover a breach when it's too late.
Wouldn't it be bracing if tech companies, I don't know, showed a little responsibility in advance?
------------------------------
Date: Thu, 13 Feb 2020 10:08:23 PST
From: "Peter G. Neumann" <
neumann@csl.sri.com>
Subject: European Parliament urges oversight for AI (Politico Europe)
Lawmakers in Strasbourg adopted a resolution calling for strong oversight of artificial intelligence technology, approving the text by hand vote while rejecting six potential amendments.
<https://www.europarl.europa.eu/doceo/document/B-9-2020-0094_EN.pdf>
The document, which was adopted by the Parliament's Committee on Internal Market and Consumer Protection (IMCO) late last month, marks the first time since new lawmakers were elected last year that the assembly has taken a
position on what kind of safeguards are needed for automated decision-making processes. It comes as political leaders at the European Commission, the
EU's executive body, are set to initiate far-reaching legislation on
artificial intelligence next week.
------------------------------
Date: Fri, 14 Feb 2020 18:51:07 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: AI can create new problems as it solves old ones (Fortune)
Some of the world's biggest companies are relying on AI to build a better workforce. But be warned: The tech can create new problems even as it
solves old ones. ...
In his Amsterdam offices, about an hour's drive from his company's largest non-American ketchup factory, Pieter Schalkwijk spends his days crunching
data about his colleagues. And trying to recruit more: As head of Kraft
Heinz's talent acquisition for Europe, the Middle East, and Africa,
Schalkwijk is responsible for finding the right additions to his region's 5,600-person team.
It's a high-volume task. Recently, for an entry-level trainee program, Schalkwijk received 12,000 applications -- for 40 to 50 openings. Which is
why, starting in the fall of 2018, thousands of recent university graduates each spent half an hour playing video games. ``I think the younger
generation is a bit more open to this way of recruiting,'' Schalkwijk says.
The games were cognitive and behavioral tests developed by startup
Pymetrics, which uses artificial intelligence to assess the personality
traits of job candidates. One game asked players to inflate balloons by
tapping their keyboard space bar, collecting (fake) money for each hit until they chose to cash in -- or until the balloon burst, destroying the
payoff. (Traits evaluated: appetite for and approach to risk.) Another
measured memory and concentration, asking players to remember and repeat increasingly long sequences of numbers. Other games registered how generous
and trusting (or skeptical) applicants might be, giving them more fake money and asking whether they wanted to share any with virtual partners. [...]
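  [Pymetrics' actual scoring is proprietary; purely as a hypothetical sketch
  of the risk/reward structure such a balloon game presents (every number
  below is invented), the trade-off a player's choices reveal looks like:

```python
import random

def play_round(pumps_intended, burst_point):
    """One balloon: pump up to the intended count, or lose it all on a burst."""
    if pumps_intended >= burst_point:
        return 0                      # balloon burst: payoff destroyed
    return pumps_intended             # one unit of (fake) money per pump

def expected_payoff(pumps_intended, rounds=10000, max_burst=20, seed=42):
    """Simulate many rounds against a uniformly random burst point."""
    rng = random.Random(seed)
    total = sum(play_round(pumps_intended, rng.randint(1, max_burst))
                for _ in range(rounds))
    return total / rounds

# A moderate strategy out-earns a reckless one; how far a candidate pushes
# is the behavioral signal such a game presumably records.
print(expected_payoff(8))    # ~4.8 (analytically 8 * 12/20)
print(expected_payoff(18))   # ~1.8 (analytically 18 * 2/20)
```

  PGN-ed]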
Still, he too is proceeding cautiously. For example, Kraft Heinz will likely never make all potential hires play the Pymetrics games. ``For generations that haven't grown up gaming, there's still a risk'' of age discrimination, Schalkwijk says.
He's reserving judgment on the effectiveness of Pymetrics until this
summer's performance reviews, when he'll get the first full assessment of whether this machine-assisted class of recruits is better or worse than previous, human-hired ones. The performance reviews will be data-driven but conducted by managers with recent training in avoiding unconscious
bias. There's a limit to what the company will delegate to the machines.
AI ``can help us and it will help us, but we need to keep checking that it's doing the right thing. Humans will still be involved for quite some time to come.''
https://fortune.com/longform/hr-technology-ai-hiring-recruitment/
But ... how can it work without quantum computing hosted blockchain?
------------------------------
Date: Thu, 13 Feb 2020 07:06:49 -0500
From: DrM <notable@mindspring.com>
Subject: AI and Ethics (NJ Tech Weekly)
https://njtechweekly.com/ai-and-ethics-part-1-will-vulnerable-ai-disrupt-the-2020-elections/
[We're doomed... Rebecca Mercuri]
------------------------------
Date: Wed, 12 Feb 2020 18:12:27 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: The future of software testing in 2020: Here's what's coming
(Functionize)
Artificial intelligence and machine learning aren't the only changes to
expect in QA, but they're a big part of it.
https://www.functionize.com/blog/the-future-of-software-testing-in-2020-heres-whats-coming/
------------------------------
Date: Fri, 14 Feb 2020 15:21:07 -0800
From: Richard Stein <rmstein@ieee.org>
Subject: Will Past Criminals Reoffend? Humans Are Terrible at Guessing,
and Computers Aren't Much Better (Scientific American)
https://www.scientificamerican.com/article/will-past-criminals-reoffend-humans-are-terrible-at-guessing-and-computers-arent-much-better/
"Although all of the researchers agreed that algorithms should be applied cautiously and not blindly trusted, tools such as COMPAS and LSI-R are
already widely used in the criminal justice system. 'I call it techno
utopia, this idea that technology just solves our problems,' Farid says. 'If the past 20 years have taught us anything, [they] should have taught us that that is simply not true.'"
In "Talking to Strangers: What We Should Know about the People We Don't
Know," Malcolm Gladwell discusses judges during an arraignment hearing to determine "own recognizance release," or to imprison a suspect based on numerous factors. What tips a judge's decision to release or hold?
Judges study prior criminal history, the crime, eyeball the suspect, etc. Do they always make a correct determination? No. News reports tragically
document instances when a judge mistakenly interprets a suspect's public
safety assessment, should the suspect commit a crime while on bail and
caught.
https://www.govtech.com/public-safety/Civil-Rights-Groups-Call-for-Reforms-on-Use-of-Algorithms-to-Determine-Bail-Risk.html
discusses algorithmic public safety assessments which can assist judicial
bail decisions.
Risk: State or Federal legislation that establishes algorithmic priority
over human judicial ruling.
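[COMPAS and LSI-R are proprietary, but the general shape of an actuarial
risk tool is a weighted combination of case factors squashed into a score.
As a toy illustration only -- every factor and weight below is invented,
not taken from any real instrument:

```python
import math

# Toy actuarial risk score, loosely in the spirit of tools like COMPAS or
# LSI-R. The inputs and weights are invented for illustration only.
WEIGHTS = {"prior_convictions": 0.30, "age_under_25": 0.8, "failed_to_appear": 0.9}
BIAS = -2.0

def reoffense_risk(prior_convictions, age_under_25, failed_to_appear):
    """Combine case factors linearly, then squash to a 0..1 'risk' score."""
    z = (BIAS
         + WEIGHTS["prior_convictions"] * prior_convictions
         + WEIGHTS["age_under_25"] * age_under_25
         + WEIGHTS["failed_to_appear"] * failed_to_appear)
    return 1 / (1 + math.exp(-z))     # logistic function

print(round(reoffense_risk(0, False, False), 2))  # 0.12
print(round(reoffense_risk(6, True, True), 2))    # 0.82
```

Such a tool only ranks defendants; someone still has to pick the cutoff
that turns a number into a detain/release decision -- and that choice is a
policy judgment, which is precisely why statutory priority for the
algorithm over the judge would be so troubling. PGN-ed]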
------------------------------
Date: Wed, 12 Feb 2020 18:07:46 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Apple joins FIDO Alliance, commits to getting rid of passwords (ZDNet)
Passwords are a notorious security mess. The FIDO Alliance wants to replace them with better, more secure technology, and now Apple is joining that effort.
https://www.zdnet.com/article/apple-joins-fido-alliance-commits-to-getting-rid-of-passwords/
...I wonder about non-tech people reacting to and adopting this...
------------------------------
Date: Fri, 14 Feb 2020 12:50:16 +0800
From: Dan Jacobson <jidanni@jidanni.org>
Subject: IRS paper forms vs. COVID-19
In some cases* the US IRS still accepts only paper tax forms. Compare this
to the government's FBAR form, which can be filed only electronically.
But in some COVID-19 areas, paper mail is no longer an option...
* E.g., Form 5329, when filed separately.
------------------------------
Date: Wed, 12 Feb 2020 18:49:02 -0500
From: John Ohno <john.ohno@gmail.com>
Subject: The Politics of Epistemic Fragmentation (Medium)
https://medium.com/the-weird-politics-review/the-politics-of-epistemic-fragmentation-175d6bbb98a4?source=friends_link&sk=eaa79383d2d43444507d0053f9803e1b
Over the past few years, it has seemed as though the only thing real news outlets can agree on is the danger of *fake news*.
Foreign powers or domestic traitors are accused of engineering political divisions, creating *polarization*, and seeding arbitrary disinformation for the sole purpose of making it impossible for people from different
subcultures to communicate. This is blamed on the Internet (and, more specifically, social media) -- and there is some truth to this accusation. However, as is often the case with new communication technologies, social
media has not accelerated this tendency towards disinformation so much as it has made it more visible and legible.
<https://modernmythology.net/contra-ovadya-on-post-truth-83bb15acce7c?source=friends_link&sk=c0aed65c0f5befe2a1e241efd8d695e3>
When widespread Internet access broke down our sense of a collective
reality, what it was toppling was not the legacy of the Enlightenment, but instead an approximately 100-year bubble in media centralization. Current
norms around meaning-making cannot survive the slow collision with
widespread private ownership of duplication & broadcast technologies.
These norms are built around an assumption that consensus is normal and desirable among people who communicate with each other -- in other words,
that whenever people calmly and rationally communicate, they will come to an understanding about base reality. This ignores the role of power relations
in communication: in modern, liberal contexts, the party that can perform
calm diplomatic rationality the best will win, and the best way to remain
calm and diplomatic is to know that if you fail in your attempts at
diplomacy, a technically advanced army will continue that diplomacy through more direct means. It also ignores the potential value of ideas (including myths) to people who do not fully understand their mechanism of action --
what the rationalist community calls *Chesterton's Fence*.
Just as we benefit from medical innovations like SSRIs and anesthesia
without knowing how or why they work, many cultures benefit from beliefs
that aren't grounded in observation, deduction, or strong evidence that
they correspond to base reality -- but, rather, by the fact that everybody
who didn't hold those beliefs eventually died for reasons that remain
obscure.
In situations of extreme cosmopolitanism, where people from different
cultures and environments communicate on equal terms, there will be disagreements that cannot be dismissed as merely aesthetic preferences or historical relics -- but that nevertheless cannot be worked out through
debate or discussion, simply because discovering their material bases is a project of immense complexity.
Epistemic fragmentation -- the tendency for different people to have
different sources of knowledge and different, often conflicting,
understandings -- is irreducible, and epistemic centralization -- the centralized control of shared sources of information -- cannot provide a universally-applicable shared understanding of the world.
We should be wary of attempts to solve this problem through `trust in institutions' -- in other words, through a return to the epistemic centralization that characterized the twentieth century.
This epistemic centralization was produced by tight control over broadcast communication -- organizations were `trusted' because they had the power (through reserves of capital, ownership of expensive equipment, and/or
explicit government support) to reach many people with the same messages,
but they were not `trustworthy' in the sense that they did not (and could
not) accurately report on reality. While plenty of these organizations
worked in good faith to be responsible and accurate, no handful of organizations has the manpower to report upon and fact check everything important.
Organizational or institutional meaning-making is a slightly scaled-up form
of individual meaning-making.
An institution provides a structure for organizing individual work, and
this structure organizes flows of resources and information.
These flows control what information can be expressed externally by
enforcing broadcast norms, house style, determining what sections are
allocated to what topics and determining what counts as newsworthy based on whether or not it fits into any of these topics, and so on; they control
what information can be expressed internally, based on norms about
professional communication, expectations about shared spaces (like DC
reporters socializing after-hours in particular bars, or tech and culture
beat journalists socializing on twitter where strict character counts force
a terse style), and social hierarchy and stigma around covering particular topics; they control what material can even be effectively researched
through the control of resources like travel expenses, deadline length, and materials for stunt-reporting.
All of these actions are essentially filters: they prevent journalists from researching and reporting on a wide variety of things they would like to
cover, while producing incentives to cover a handful of specific things. Because of this, no institution can produce better-quality meaning (i.e., meaning formed by serious consideration of a wider variety of sources) than
the individuals working for it could produce under a looser confederation, assuming the resources necessary for access remained available.
Consensus reality is merely a side effect of ignoring or erasing the pieces that cannot be made legible and cannot be made to fit any narrative or
model -- and this erasure is political, in the sense that it shapes what can
be imagined and what can be spoken about.
We cannot effectively consider topics we are not allowed to discuss; we
cannot make good personal decisions about topics we cannot effectively consider; we cannot make good collective decisions on topics about which we cannot make good personal decisions; therefore, the soft-censorship necessitated by the limited resources of the centralized meaning-making
that engineers the illusion of consensus reality prevents politics from effectively addressing problems that affect only a few but that require
mass action and solidarity to solve
<https://medium.com/the-weird-politics-review/my-revolution-was-never-a-possibility-notes-on-adhd-anarchism-and-accelerationism-ed9d5113f9e0>.
The private supplementation of centralized shared knowledge is insufficient.
The twentieth century model of broadcast media is an extension of earlier models of (print-based) publishing: in the beginning, printing presses and radio stations are expensive and a handful of early experimenters create content for a handful of early adopters; as equipment costs drop, more
people get into the market, leading to a push to regulate and
re-centralize, helmed on one side by the biggest players in the market and
on the other by folks who are concerned about signal-to-noise ratio.
This leads to self-enforced standards -- rules for journalism, for instance
-- along with state-enforced measures to create a `legitimate' class and separate it from `illegitimate' amateurs -- copyright, spectrum
subdivisions, broadcast content rules.
Broadcast mechanisms have typically remained expensive, regardless of how technology has progressed: prior to widespread Internet access, the
cheapest broadcast medium (in terms of the ability of an individual of
modest means to reach many people) was the production of xerox pamphlets -- tens of cents per copy, plus postage.
With the Internet, copying has a much lower cost & can be performed without
the direct, intentional involvement of recipients -- what costs do exist can
be automatically distributed more evenly, rather than being concentrated in
the hands of some central node.
(Because of a historical mistake, the web concentrates costs centrally, but peer to peer communications technologies do not.)
<https://medium.com/@enkiv2/there-was-never-an-open-web-69194f9b1cf1?source=friends_link&sk=7aed9c67a373e1334f671be2b0b78afc>
This breaks the economic justification for a tendency toward
re-centralization in the distribution of information -- a justification that had previously made the institutionalization of consensus-making
unavoidable.
Prior to widespread literacy and widespread access to oral mass broadcast media, consensus-making and meaning-making was a social process rather than
a parasocial one.
Time-binding technologies -- mechanisms to permanently record and retrieve information, so that information that originated long ago or far away could
be transmitted without distortion -- were limited to print.
Writing, in the absence of mass-production technologies, had more of an
oral aspect to it: while Babylonian kings would manufacture negative molds
for exactly reprinting laws, manuscripts were largely transcribed by
students in lecture halls who included the lecturer's asides in their transcriptions alongside their own notes, and these modified manuscripts
would be the basis for later lectures or would be copied by hand. In other words, before print, it was rare for even writing to be `broadcast' in the sense of a large number of people receiving exactly the same information,
[continued in next message]