• Risks Digest 31.81 (1/2)

    From RISKS List Owner@21:1/5 to All on Fri May 8 20:06:33 2020
    RISKS-LIST: Risks-Forum Digest Friday 8 May 2020 Volume 31 : Issue 81

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/31.81>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    U.S. government plans to urge states to resist 'high-risk' Internet voting
    (Kim Zetter)
    Trading computer can't handle negative numbers (Henry Baker)
    Nearly 20,000 Georgia Teens Are Issued Driver's Licenses Without a Road Test
    (NYTimes)
    Risk of Misinterpreting Hydrogen Peroxide Indicator Colors for Vapor
    Sterilization: Letter to Health Care Providers (FDA)
    GitHub Takes Aim at Open Source Software Vulnerabilities (WiReD)
    Snake ransomware targeting healthcare now claims to steal unencrypted files
    before encrypting computers on a network (BleepingComputer)
    China's Military Is Tied to Debilitating New Cyberattack Tool (NYTimes)
    Coronavirus Proves Only Structural Changes Can Avert Climate Apocalypse
      (IEA)
    Which COVID-19 models should we use to make policy decisions? (MedicalXpress)
    COVID SW model is a steaming pile ... (Whistleblower via Henry Baker)
    German contact-tracing app to be rolled out in mid-June (Politico)
    Digital immunity passport is `the lesser of two evils' (Politico)
    Flu vs. COVID-19 (geoff goodfellow)
    Re: Visualization shows droplets from one cough on an airplane (Amos Shapir)
    Re: What the Coronavirus Crisis Reveals... (Chris Drewe)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Fri, 8 May 2020 10:55:09 PDT
    From: "Peter G. Neumann" <neumann@csl.sri.com>
    Subject: U.S. government plans to urge states to resist 'high-risk' Internet
    voting (Kim Zetter)

    Kim Zetter in *The Guardian* [always an incisive reporter. PGN]

    https://www.theguardian.com/us-news/2020/may/08/us-government-internet-voting-department-of-homeland-security

    [Added note: *The Guardian* has published the entire DHS document. PGN]
    https://www.scribd.com/document/460491458/CISA-Guidelines-on-Internet-Voting

    ------------------------------

    Date: Fri, 08 May 2020 15:16:59 -0700
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: Trading computer can't handle negative numbers

    I know it's hard to believe after all of the Y2K hoopla, but here we are
    again.

    Trading computer software can't handle negative oil prices; it cost the firm at least $100 million.

    Next up: I would imagine that negative interest rates would blow
    U.S. financial customer accounts sky high (EU customers have already seen negative interest rates).

    BTW, square roots crop up in some trading calculations -- e.g., option
    pricing. How long until we read about trading computers blowing up with complex numbers?
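
    As a minimal Python sketch of that square-root point (nobody's actual
    trading code, just the arithmetic): a pricing routine that takes the
    square root of a quantity assumed non-negative either crashes or
    silently produces NaN once that assumption fails.

        import math

        sigma, t = 0.35, 30 / 365            # volatility and time to expiry (years)
        print(sigma * math.sqrt(t))          # the sqrt-of-time term in option pricing

        try:
            print(sigma * math.sqrt(-t))     # a negative input to sqrt...
        except ValueError as err:
            print("pricing blew up:", err)   # ...raises "math domain error" here;
                                             # numpy.sqrt would silently return nan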

    https://www.bloomberg.com/news/articles/2020-05-08/oil-crash-busted-a-broker-s-computers-and-inflicted-huge-losses?srnd=premium

    Matthew Leising Updated 8 May 2020, 12:16 PM EDT
    Oil Crash Busted Broker's Computers and Inflicted Big Losses

    Interactive Brokers users couldn't trade when oil broke zero
    Incident will cost firm more than $100 million, chairman says

    Syed Shah usually buys and sells stocks and currencies through his
    Interactive Brokers account, but he couldn't resist trying his hand at some
    oil trading on April 20, the day prices plunged below zero for the first
    time ever. The day trader, working from his house in a Toronto suburb,
    figured he couldn't lose as he spent $2,400 snapping up crude at $3.30 a barrel, and then 50 cents. Then came what looked like the deal of a
    lifetime: buying 212 futures contracts on West Texas Intermediate for an astonishing penny each.

    What he didn't know was oil's first trip into negative pricing had broken Interactive Brokers Group Inc. Its software couldn't cope with that pesky
    minus sign, even though it was always technically possible -- though this
    was an outlandish idea before the pandemic -- for the crude market to go
    upside down. Crude was actually around negative $3.70 a barrel when Shah's screen had it at 1 cent. Interactive Brokers never displayed a subzero price
    to him as oil kept diving to end the day at minus $37.63 a barrel.
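
    Bloomberg doesn't say how the software mishandled the sign, but the
    symptom -- crude around minus $3.70 showing on screen as 1 cent -- is
    what you'd see from a front end that clamps prices at a minimum
    positive tick. A purely hypothetical sketch of that failure mode, not
    Interactive Brokers' actual code:

        TICK = 0.01   # smallest positive price the front end was designed to show

        def displayed_price(market_price):
            # Sign-blind handling: anything at or below zero is clamped to the
            # minimum tick rather than shown as a negative number.
            return max(market_price, TICK)

        for p in (3.30, 0.50, -3.70, -37.63):
            print(f"market {p:8.2f}   screen shows {displayed_price(p):8.2f}")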

    At midnight, Shah got the devastating news: he owed Interactive Brokers $9 million. He'd started the day with $77,000 in his account.

    "I was in shock," the 30-year-old said in a phone interview. "I felt like everything was going to be taken from me, all my assets."

    To be clear, investors who were long those oil contracts had a brutal day, regardless of what brokerage they had their account in. What set Interactive Brokers apart, though, is that its customers were flying blind, unable to
    see that prices had turned negative, or in other cases locked into their investments and blocked from trading. Compounding the problem, and a big reason why Shah lost an unbelievable amount in a few hours, is that the negative numbers also blew up the model Interactive Brokers used to
    calculate the amount of margin -- aka collateral -- that customers needed to secure their accounts.

    Thomas Peterffy, the chairman and founder of Interactive Brokers, says the journey into negative territory exposed bugs in the company's
    software. "It's a $113 million mistake on our part," the 75-year-old billionaire said in an interview Wednesday. Since then, his firm revised its maximum loss estimate to $109.3 million. It's been a moving target from the start; on April 21, Interactive Brokers figured it was down $88 million from the incident.

    Customers will be made whole, Peterffy said. "We will rebate from our own
    funds to our customers who were locked in with a long position during the
    time the price was negative any losses they suffered below zero."

    That could help Shah. The day trader in Mississauga, Canada, bought his
    first five contracts for $3.30 each at 1:19 p.m. that historic Monday. Over
    the next 40 minutes or so he bought 21 more, the last for 50 cents. He tried
    to put an order in for a negative price, but the Interactive Brokers system rejected it, so he became more convinced that it wasn't possible for oil to
    go below zero. At 2:11 p.m., he placed that dream-turned-nightmare trade at
    a penny.

    It was only later that night that he saw on the news that oil had plunged to the never-before-seen price of negative $37.63 per barrel. What did that
    mean for the hundreds of contracts he'd bought? He frantically tried to contact support at the firm, but no one could help him. Then that late-night statement arrived with a loss so big it was expressed with an exponent.

    The problem wasn't confined to North America. Thousands of miles away, Interactive Brokers customer Manfred Koller ran into trouble similar to what Shah faced. Koller, who lives near Frankfurt and trades from his home
    computer on behalf of two friends, also didn't realize oil prices could go negative.

    He'd bought contracts for his friends on Interactive Brokers that day at $11 and between $4 and $5. Just after 2 p.m. New York time, his trading screen froze. "The price feed went black, there were no bids or offers anymore," he said in an interview. Yet as far as he knew at this point, according to his Interactive Brokers account, he didn't have anything to worry about as
    trading closed for the day.

    Following the carnage, Interactive Brokers sent him notice that he owed $110,000. His friends were completely wiped out. "This is definitely not
    what you want to do, lose all your money in 20 minutes," Koller said.

    Besides locking up because of negative prices, a second issue concerned the amount of money Interactive Brokers required its customers to have on hand
    in order to trade. Known as margin, it's a vital risk measure to ensure
    traders don't lose more than they can afford. For the 212 oil contracts Shah bought for 1 cent each, the broker only required his account to have $30 of margin per contract. It was as if Interactive Brokers thought the potential loss of buying at one cent was one cent, rather than the almost unlimited downside that negative prices imply, he said.
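
    A back-of-the-envelope check of how far off that margin figure was,
    using only numbers from the article plus the standard 1,000-barrel WTI
    contract size (the contract size is my assumption, not stated in the
    article):

        CONTRACT_BARRELS = 1_000      # standard NYMEX WTI contract size (assumed)
        contracts   = 212
        entry_price = 0.01            # dollars per barrel Shah paid
        settlement  = -37.63          # where the May contract finished the day

        margin_held = 30 * contracts                                      # $6,360
        loss = (entry_price - settlement) * CONTRACT_BARRELS * contracts  # ~$7.98M

        print(f"margin required: ${margin_held:,.0f}")
        print(f"loss at settlement: ${loss:,.0f}")
        print(f"shortfall: {loss / margin_held:,.0f}x the margin held")

    Roughly $8 million on the penny trade alone, which together with the
    contracts bought earlier in the day is consistent with the $9 million
    Shah was told he owed.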

    "It seems like they didn't know it could happen," Shah said.

    But it was known industry-wide that CME Group Inc.'s benchmark oil contracts could go negative. Five days before the mayhem, the owner of the New York Mercantile Exchange, where the trading took place, sent a notice to all its clearing-member firms advising them that they could test their systems using negative prices. "Effective immediately, firms wishing to test such negative futures and/or strike prices in their systems may utilize CME's 'New
    Release' testing environments" for crude oil, the exchange said.

    Interactive Brokers got that notice, Peterffy said. But he doesn't feel five days was enough time to upgrade his company's trading platform.

    "Five days, including the weekend, with the coronavirus going on and a
    complex system where we have to make many changes, was not a sufficient
    amount of time," he said. "The idea we could have bugs is not, in my mind, a surprise." He also acknowledged the error in the margin model Interactive Brokers used that day.

    According to Peterffy, its customers were long 563 oil contracts on Nymex,
    as well as 2,448 related contracts listed at another company,
    Intercontinental Exchange Inc. Interactive Brokers foresees refunding
    $18,815 for the Nymex ones and $37,630 for ICE's, according to a spokesman.

    To give a sense of how far off the Interactive Brokers margin model was that day, similar trades to what Shah placed would have required $6,930 per trade
    in margin if he placed them at Intercontinental Exchange. That's 231 times
    the $30 Interactive Brokers charged.

    "I realized after the fact the margin for those contracts is very high and these trades should never have been processed," he said. He didn't sleep for three nights after getting the $9 million margin call, he said.

    Peterffy accepted blame, but said there was little market liquidity after prices went negative, which could've prevented customers from exiting their trades anyway. He also laid responsibility on the exchanges and said the company had been in touch with the industry's regulator, the U.S. Commodity Futures Trading Commission.

    "We have called the CFTC and complained bitterly," Peterffy said. "It
    appears the exchanges are going scot-free."

    Representatives of CME and Intercontinental Exchange declined to comment. A CFTC spokesman didn't immediately return a request for comment.

    Peterffy said there's a problem with how exchanges design their contracts because the trading dries up as they near expiration. The May oil futures contract -- the one that went negative -- expired the day after the historic plunge, so most of the market had moved to trading the June contract, which expires May 19 and currently trades around $24 a barrel.

    "That's how it's possible for these contracts to go absolutely crazy and
    close at a price that has no economic justification," Peterffy said. "The
    issue is whose responsibility is this?"

    -- With assistance by Melinda Grenier

    (Adds details of June contract in penultimate paragraph. A previous version
    of this story was corrected because Interactive Brokers gave the wrong estimated refund for the Nymex contracts in the 18th paragraph.)

    ------------------------------

    Date: Fri, 8 May 2020 11:56:54 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: Nearly 20,000 Georgia Teens Are Issued Driver's Licenses Without a
    Road Test (NYTimes)

    Last month, Gov. Brian Kemp suspended the requirement that most Georgians pass a behind-the-wheel test when applying for licenses.

    https://www.nytimes.com/2020/05/07/us/georgia-teen-driving-test-coronavirus.html

    ------------------------------

    Date: Thu, 7 May 2020 15:21:20 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: Risk of Misinterpreting Hydrogen Peroxide Indicator Colors for
    Vapor Sterilization: Letter to Health Care Providers (FDA)

    The U.S. Food and Drug Administration (FDA) has become aware of the
    potential for health-care facility staff that reprocess and sterilize
    medical devices to misinterpret the indicators used to validate the sterilization of medical devices because there is no standard indicator
    color to indicate a sterilized device. [...]

    https://www.fda.gov/medical-devices/letters-health-care-providers/risk-misinterpreting-hydrogen-peroxide-indicator-colors-vapor-sterilization-letter-health-care

    ------------------------------

    Date: Thu, 7 May 2020 23:58:07 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: GitHub Takes Aim at Open Source Software Vulnerabilities (WiReD)

    GitHub Advanced Security will help automatically spot potential security problems in the world's biggest open source platform.

    https://www.wired.com/story/github-advanced-security-open-source/

    ------------------------------

    Date: Thu, 7 May 2020 13:40:22 -1000
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Snake ransomware targeting healthcare now claims to steal
    unencrypted files before encrypting computers on a network
      (BleepingComputer)

    The operators of the Snake Ransomware have launched a worldwide campaign of cyberattacks that have infected numerous businesses and at least one health care organization over the last few days.

    This past January, BleepingComputer reported on the new Snake ransomware
    that was targeting enterprise networks. <https://www.bleepingcomputer.com/news/security/snake-ransomware-is-the-next-threat-targeting-business-networks/>

    Since then, the ransomware operators have been relatively quiet, with
    little to no new infections being detected in the wild.

    This lack of activity all changed on May 4th, when the ransomware operators conducted a massive campaign that targeted organizations throughout the
    world and across all verticals.

    Starting on May 4th, the ransomware identification site ID Ransomware showed a massive jump in submissions after seeing only a few here and there over the last couple of months.
    <https://id-ransomware.malwarehunterteam.com/>

    According to security reporter Brian Krebs, one of the victims allegedly hit
    by the Snake Ransomware in this campaign is Fresenius Group, Europe's
    largest hospital provider.

    "Fresenius, Europe's largest private hospital operator and a major provider
    of dialysis products and services that are in such high demand thanks to the COVID-19 pandemic, has been hit in a ransomware cyber attack on its
    technology systems. The company said the incident has limited some of its operations, but that patient care continues," Krebs reported. <http://krebsonsecurity.com/2020/05/europes-largest-private-hospital-operator-fresenius-hit-by-ransomware/>

    BleepingComputer has since been able to independently confirm that the Snake Ransomware attacked Fresenius on May 4th.

    This same source told us that numerous other companies were hit, including
    an architectural firm in France and a prepaid debit card company.

    Snake claims to now steal files before encrypting

    *As has now become routine with ransomware, Snake now claims to steal unencrypted files before encrypting computers on a network.*

    *As noted by MalwareHunterTeam, in the ransom note named 'Decrypt-Your-Files.txt' from this week's attacks, the Snake operators have added text stating that they will publish stolen databases and documents if
    not paid within 48 hours*. [...] <https://twitter.com/malwrhunterteam/status/1258080951101468673>

    https://www.bleepingcomputer.com/news/security/large-scale-snake-ransomware-campaign-targets-healthcare-more/

    ------------------------------

    Date: Thu, 7 May 2020 08:10:04 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: China's Military Is Tied to Debilitating New Cyberattack Tool
    (NYTimes)

    An Israeli security company said the hacking software, called Aria-body, had been deployed against governments and state-owned companies in Australia and Southeast Asia.

    https://www.nytimes.com/2020/05/07/world/asia/china-hacking-military-aria.html

    ------------------------------

    Date: Wed, 6 May 2020 15:01:33 -1000
    From: the keyboard of geoff goodfellow <geoff@iconia.com>
    Subject: Coronavirus Proves Only Structural Changes Can Avert Climate
    Apocalypse (IEA)

    We are still screwed if we do not permanently alter how we produce and
    consume energy as a civilization.

    A new International Energy Agency report warns that while 2020 may see the largest CO2 emissions drop on record because of the coronavirus pandemic,
    there is still cause for concern. <https://www.iea.org/reports/global-energy-review-2020>

    The IEA anticipates carbon emissions will drop almost 8 percent -- six times larger than the previous record caused by the 2008 global financial crisis
    and twice as large as the sum total of every reduction since the end of
    World War II. Global energy demand will fall 6 percent, which is seven
    times larger than the decline from the 2008 global financial crisis and equivalent to losing the entire energy demand of India. Renewables are the
    only energy source expected to see any growth in use (1.5 percent) or generation (3 percent), while oil demand will drop by 9 percent, coal by 8 percent, and natural gas by 5 percent.

    All these numbers are staggering, but they are also inadequate. Despite the
    70-year lows for each of these carbon energy sources and the IEA's
    estimation that 50 percent of all global energy use is exposed to these
    global containment measures, we're far from the reductions needed to avert climate catastrophe. Moreover, these reductions are inequitable and have
    come at a tragic personal cost to many. Structural changes <https://lareviewofbooks.org/article/climate-commonwealth-and-the-green-new-deal-a-conversation-with-alyssa-battistoni-and-jedediah-britton-purdy/>
    (e.g., an internationalist Green New Deal that favors the working class) are necessary if we are to have any hope.

    As Vox's David Roberts writes, limiting climate change to 1.5 degrees
    Celsius -- our only shot at avoiding hundreds of millions of deaths and widespread ecological collapse -- means "emissions would need to fall off a cliff, falling by 15% a year every year, starting in 2020, until they hit zero." In fact, the emissions reduction we are on track to experience may
    yield no durable environmental benefits that last beyond the lockdowns as
    urban pollution, for example, will quickly return. <https://www.vox.com/energy-and-environment/2020/1/3/21045263/climate-change-1-5-degrees-celsius-target-ipcc>
    <https://www.vice.com/en_ca/article/7kzqja/coronavirus-emissions-climate-science>

    This insufficient but historic reduction is thanks to travel restrictions
    and economic lockdowns that have caused spikes in unemployment dwarfing
    those of the Great Recession and approaching Great Depression levels. In the United States alone, a country where nearly half of the population lives paycheck to paycheck, far more than the reported 30 million people have
    likely lost their jobs and a perpetual rent strike is developing as a
    growing plurality of tenants are simply unable to make ends meet. The full human cost of this pandemic has yet to emerge -- its immediate death toll
    may be underreported, but it has been obvious for months that the pandemic would make plain that our country views its most vulnerable populations as disposable. [...] <https://www.theatlantic.com/magazine/archive/2016/05/my-secret-shame/476415/> <https://www.businessinsider.com/us-unemployment-likely-higher-than-jobless-claims-show-coronavirus-jobs-2020-5>
    <https://newrepublic.com/article/157462/era-endless-rent-strike> <https://www.nytimes.com/interactive/2020/04/28/us/coronavirus-death-toll-total.html>
    <https://www.vox.com/2020/2/7/21126758/coronavirus-xenophobia-racism-china-asians>
    <http://bostonreview.net/class-inequality-race-politics/shaun-ossei-owusu-coronavirus-and-politics-disposability>
    https://www.vice.com/en_us/article/n7wjwz/coronavirus-proves-only-structural-changes-can-avert-climate-apocalypse

    ------------------------------

    Date: Fri, 8 May 2020 08:53:39 +0900
    From: Dave Farber <farber@gmail.com>
    Subject: Which COVID-19 models should we use to make policy decisions?
      (MedicalXpress)

    https://medicalxpress.com/news/2020-05-covid-policy-decisions.html

    Which COVID-19 models should we use to make policy decisions? Pennsylvania State University <http://www.psu.edu/>

    [Image caption: A new process to evaluate multiple disease models will help
    identify which intervention measures may be most successful during an
    outbreak. Shown here, the entry process for students at Lanzhou University
    in China involves scanning a university ID, which is associated with the
    student's body temperature history, travel history, and other information,
    while a machine detects current body temperature. Credit: Shouli Li,
    Lanzhou University]

    With so many COVID-19 models being developed, how do policymakers know which
    ones to use? A new process to harness multiple disease models for outbreak
    management has been developed by an international team of researchers. The
    team describes the process in a paper appearing May 8 in the journal Science,
    and was awarded a Grant for Rapid Response Research (RAPID) from the
    National Science Foundation to immediately implement the process to help
    inform policy decisions for the COVID-19 outbreak.

    During a disease outbreak <https://medicalxpress.com/tags/outbreak/>, many research groups independently generate models, for example projecting how
    the disease will spread, which groups will be impacted most severely, or how implementing a particular management action might affect these
    dynamics. These models help inform public health policy for managing the outbreak.

    "While most models have strong scientific underpinnings, they often differ greatly in their projections and policy recommendation," said Katriona Shea, professor of biology and Alumni Professor in the Biological Sciences, Penn State. "This means that policymakers are forced to rely on consensus when it appears, or on a single trusted source of advice, without confidence that
    their decisions will be the best possible."

    At the onset of an outbreak, particularly for a new disease, a large amount
    of information is often unavailable or unknown, and researchers must make decisions about how to incorporate this uncertainty into their models,
    leading to differing projections. For the COVID-19 outbreak, for example, uncertainty is present in a wide range of areas, from infection rate to
    details of transmission to the capacity of health care systems. The
    designers of each model <https://medicalxpress.com/tags/model/> bring their
    own perspective and approach to address these uncertainties.

    [Image caption: A new process to evaluate multiple disease outbreak models
    will help inform public health policy decisions for managing the outbreak.
    The process is currently being applied to the current COVID-19 outbreak.
    Credit: Will Probert, University of Oxford]

    "In order to improve modeling and analysis of epidemic disease, it is
    essential to develop protocols that deliberately generate and evaluate
    valuable individual ideas from across the modeling community," said Michael
    Runge, a research ecologist at the U.S. Geological Survey's Patuxent
    Wildlife Research Center who specializes in decision analysis for wildlife
    management. "We have identified best practices
    <https://medicalxpress.com/tags/best+practices/> that allow the synthesis
    and evaluation of input from multiple modeling groups in an efficient and
    timely manner."

    In the three-part process, multiple research groups first create models for specified management scenarios, for example, addressing how caseload would
    be affected if social isolation measures were lifted this summer, or how the duration of the outbreak would change if students return to school in the
    fall. The research groups work independently during this step to encourage a wide range of ideas without prematurely conforming to a certain way of thinking. Then, the modeling groups formally discuss their models with each other -- an important addition to previous multiple model methods -- which allows them to examine why their models might disagree. Finally, the groups work independently again to refine their models, based on the insights from
    the discussion and comparison stage.

    After group discussion and individual model refinement, the models are
    combined into an overall projection for each management strategy, which can
    be used to help guide risk analysis and policy deliberation. At this stage, methods from the field of decision analysis can allow the decision maker,
    for example a public health agency, to understand the merits of different management options in the face of the existing uncertainty.

    Additionally, the combined results can help identify which uncertainties --
    what pieces of missing information -- are most critical to learn about in
    order to improve models and thus improve decision making, providing a way to prioritize research directions.
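
    The paper's protocol is more elaborate than a simple average, but as a
    minimal Python sketch of the aggregation idea (all model names and
    numbers below are invented for illustration):

        import statistics

        # Hypothetical projections (peak daily cases, in thousands) from three
        # independent modeling groups, for two candidate management options.
        projections = {
            "lift distancing this summer": {"model A": 95, "model B": 140, "model C": 120},
            "keep distancing in place":    {"model A": 40, "model B": 55,  "model C": 45},
        }

        for option, by_model in projections.items():
            values = list(by_model.values())
            mean   = statistics.mean(values)
            spread = statistics.stdev(values)   # disagreement worth investigating
            print(f"{option:30s} ensemble mean {mean:6.1f}k   spread {spread:5.1f}k")

    The spread within each option is a crude stand-in for the disagreement
    the authors propose examining to decide which missing information
    matters most.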

    "This process allows us to embrace uncertainty, rather than hastening to a premature consensus that could derail or deflect management efforts," said Shea. "The process encourages a healthy conversation between scientists and decision makers, enabling policy agencies to more effectively achieve their management goals."

    Even after initial decisions are made, the process can continue as new information about the outbreak and management becomes available. This
    "adaptive management" strategy can allow researchers to refine their models
    and make new predictions as the outbreak progresses. For COVID-19, this
    process might inform how and when isolation and travel bans are lifted, and
    if these or other measures might be necessary again in the future.

    The research team plans to implement this process immediately for
    COVID-19. By taking advantage of the many research groups already producing models for the current outbreak, the strategy should be easy to implement
    while producing more robust results from the existing process. The team will share results with the U.S. Centers for Disease Control and Prevention as
    they are generated.

    "We hope this process actively feeds into policy for the COVID-19 response
    in the United States," said Shea. "It also provides a framework for future outbreak settings, including emerging diseases and agricultural pest
    species, and management of endemic infectious diseases, including
    vaccination strategies and disease surveillance."


    More information: K. Shea et al., "Harnessing multiple models for outbreak management," Science (2020). <https://science.sciencemag.org/cgi/doi/10.1126/science.abb9934>
    Provided by Pennsylvania State University <https://medicalxpress.com/partners/pennsylvania-state-university/> <http://www.psu.edu/>

    ------------------------------

    Date: Fri, 08 May 2020 10:52:17 -0700
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: COVID SW model is a steaming pile ... (Whistleblower)

    [lockdown item also noted by Steven J. Greenwald. PGN]

    Apparently, Ferguson's COVID computer model, on which basis several trillion-dollar quarantining decisions have been made, is a steaming pile of crap software code.

    This case is a perfect example of why we need fully *open source* computer
    code for any accepted scientific results.

    Briefly, the Ferguson model is a 'Monte Carlo' simulation of a complex networked system which is fed by a pseudo-random number generator ("PRNG")
    to enable the 'Monte Carlo' aspect of the simulation.

    Normally, such a PRNG generates a random number sequence determined by its initial "seed": the sequence is identical if and only if the seed is
    identical. Since the behavior of the model is determined by the random
    number sequence, the behavior of the model is identical if and only if the
    seed is identical.
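
    In Python rather than the model's C++, the property Baker describes
    looks like this (a minimal sketch, not the covid-sim code):

        import random

        def toy_monte_carlo(seed, draws=5):
            rng = random.Random(seed)   # PRNG state is fully determined by the seed
            return [round(rng.random(), 6) for _ in range(draws)]

        # Same seed, same sequence, hence identical simulation behaviour:
        assert toy_monte_carlo(42) == toy_monte_carlo(42)
        # A different seed gives a different sequence, but stays reproducible:
        assert toy_monte_carlo(42) != toy_monte_carlo(43)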

    Ferguson's model does *not* have this behavior -- it has non-deterministic behavior over and above that introduced by the PRNG -- some due perhaps to
    the non-determinism in the parallel scheduling algorithms. Worse, this non-determinism produces dramatically different results (not entirely unexpected due to the exponential behavior of positive feedback loops).
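
    To see how parallel scheduling can smuggle extra non-determinism past a
    fixed seed, here is a toy illustration (Python threads standing in for
    whatever parallelism the real model uses; again, not covid-sim's code):

        import random
        from concurrent.futures import ThreadPoolExecutor, as_completed

        def run_model(seed, households=8):
            rng = random.Random(seed)      # one seeded PRNG shared by all workers

            def simulate(household_id):
                # Whichever thread gets here first takes the *next* draw from the
                # shared stream, so the draw each household receives depends on
                # thread scheduling, not just on the seed.
                return household_id, rng.random()

            outcomes = {}
            with ThreadPoolExecutor(max_workers=4) as pool:
                futures = [pool.submit(simulate, h) for h in range(households)]
                for f in as_completed(futures):
                    h, infection_rate = f.result()
                    outcomes[h] = infection_rate

            # An outcome that depends on which draw went to which household:
            return sum((h + 1) * rate for h, rate in outcomes.items())

        # Same seed twice; the two results may or may not match on a given pair
        # of runs, because nothing pins the draw-to-household assignment down.
        print(run_model(2020), run_model(2020))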

    What Ferguson has done isn't science, but *witchcraft*. Sometimes the witch doctor produces a correct answer by the miracle of coincidence, but science does not progress by standing on the shoulders of witch doctors.

    With apologies to Max Planck, "science is here progressing funeral by
    needless funeral".

    Trillion dollar decisions cannot be based upon software of this poor
    quality.

    https://lockdownsceptics.org/code-review-of-fergusons-model/

    Code Review of Ferguson's Model
    Sue Denim (not the author's real name)

    Imperial finally released a derivative of Ferguson's code. I figured I'd do
    a review of it and send you some of the things I noticed. I don't know your background so apologies if some of this is pitched at the wrong level.

    My background. I wrote software for 30 years. I worked at Google between
    2006 and 2014, where I was a senior software engineer working on Maps, Gmail and account security. I spent the last five years at a US/UK firm where I designed the company's database product, amongst other jobs and projects. I
    was also an independent consultant for a couple of years. Obviously I'm
    giving only my own professional opinion and not speaking for my current employer.

    The code. It isn't the code Ferguson ran to produce his famous Report
    9. What's been released on GitHub is a heavily modified derivative of it,
    after having been upgraded for over a month by a team from Microsoft and others. This revised codebase is split into multiple files for legibility
    and written in C++, whereas the original program was "a single 15,000 line
    file that had been worked on for a decade" (this is considered extremely
    poor practice). A request for the original code was made 8 days ago but ignored, and it will probably take some kind of legal compulsion to make
    them release it. Clearly, Imperial are too embarrassed by the state of it
    ever to release it of their own free will, which is unacceptable given that
    it was paid for by the taxpayer and belongs to them.

    https://github.com/mrc-ide/covid-sim

    https://github.com/mrc-ide/covid-sim/issues/144

    The model. What it's doing is best described as "SimCity without the graphics". It attempts to simulate households, schools, offices, people and their movements, etc. I won't go further into the underlying assumptions,
    since that's well explored elsewhere.

    Non-deterministic outputs. Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is
    unimportant.

    This problem makes the code unusable for scientific purposes, given that a
    key part of the scientific method is the ability to replicate
    results. Without replication, the findings might not be real at all -- as
    the field of psychology has been finding out to its cost. Even if their original code was released, it's apparent that the same numbers as in Report
    9 might not come out of it.

    Non-deterministic outputs may take some explanation, as it's not something anyone previously floated as a possibility.

    The documentation says:

    The model is stochastic. Multiple runs with different seeds should
    be undertaken to see average behaviour.

    "Stochastic" is just a scientific-sounding word for "random". That's not a problem if the randomness is intentional pseudo-randomness, i.e. the
    randomness is derived from a starting "seed" which is iterated to produce
    the random numbers. Such randomness is often used in Monte Carlo
    techniques. It's safe because the seed can be recorded and the same

    [continued in next message]
