• Delayed Quantum Choice Eraser (1/2)

    From Kay Lie@21:1/5 to All on Sun Jul 16 06:46:51 2023
    tipjar glyph at search results. Benefits Bing, a larger number of text words, and 1% of tips to the passthrough page (a search engine, a YouTube or porn video site, Facebook, Twitter, etc.)
    scholar.google.com

    Every <3-able, likeable, or up-arrow-votable meme, or Facebook and other social networking item and update, becomes a DoMoney transfer point.

    Prior art: Reddit lets people buy a few hundred award coins for $5.99; the ratio of views to award coins suggests about $1-5 of award coins are given per 100K views of things like really nice pencil shavings on Reddit.

    For people that think there are hegemons: having all the browsers put the US dollar, or something named dollar, on every right click supports the dollar hegemon, which might be meaningful to those that have perspectives on Euro and Yuan hegemons

    JavaScript function library: any thumbs up, heart, tipjar (new Unicode glyph), email address

    W3C puts it at all browsers globally
    XML: <tipjar_goes_to>Treon Verdery</tipjar_goes_to>
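    The tag idea above could be sketched as a tiny extractor. This is a minimal sketch: the <tipjar_goes_to> tag name is this post's own invention, not any real standard, and the sample page string is made up.

```python
import re

def find_tip_recipient(html: str):
    """Extract the tip recipient from a hypothetical <tipjar_goes_to> tag.

    The tag is the post's proposed convention, not an existing web standard.
    """
    match = re.search(r"<tipjar_goes_to>\s*(.*?)\s*</tipjar_goes_to>",
                      html, re.IGNORECASE | re.DOTALL)
    return match.group(1) if match else None

page = "<p>Nice meme</p><tipjar_goes_to>Treon Verdery</tipjar_goes_to>"
print(find_tip_recipient(page))  # → Treon Verdery
```

    A browser-shipped version would of course be JavaScript; the point is just that the tag is trivially machine-readable.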

    7.3 billion people; 1% is 73 million global content producers
    credit union ethical, compare microcredit


    IDA: real estate is a category; does that include investment real estate; also things that mature after I'm 62 and there is an absence of SSI penalty for earnings and over-$2000 cash gatherings, like christmas tree farms, timberland, REITs (ask), real estate partnerships; this blends over into LLC theory. Dubious as anything: minute income from vending machine partnerships that goes up at 62, when, hypothetically, SSI is replaced by regular social security.
    The LLC invests in corporate junk paper; is that permitted; does autoreinvestment of dividends, or rather, at corporate junk paper, auto-acquisition of more junk paper through the LLC, that then gets turned to money after 62; at an LLC it's fine to reinvest all revenues in LLC growth rather than earning money;

    The 11-29 ebay singles or triples products, done by other people, then picked up by the LLC after they are proven to work; renting out the ebay singles or triples product groups, kind of like a franchise; franchisees omit competing, but can pick up new ebay singles or triples (likely to be better than earlier successes!) after they test an idea for a year. Note: the opportunity cost of 100-900 hours of ebay is up to 1800 omitted technologies new to me, that I think benefit humans, if I produce a new technology each 1/2 hour.

    Noting IDA, being parsimonious with time rather than $ makes sense; fiverr; others do fulfillment; people on fiverr get commission if they get someone to test an ebay product free for a month or a year (!)
    With 24-72 ebay products, the product triples make 8-24 ebay testers; these could be found at fiverr for an upfront fiverr fee plus commission. 10% of ebay products perform strongly, so that's 3-7 very high performing ebay products the LLC can have fiverr fulfillment people do.

    What can be done in 1 hour? 11 hours?
    Find 11 ebay products in 11 hours, maybe (top ten most active at every category of alibaba indexed to activity at ebay)
    3-4 hours: explain to fiverr people on a webpage what I want them to do to get upfront fee + commission for finding people to test ebay triples complimentary for a year (my simultaneous use permitted)

    2 hours: order an actual alibaba product, which can be a longevity chemical or drug, and get it shipped the cheapest way to verify this works. Avoid express mail margin-reducing shipping, but if 100 or 400 STI $11 ebay tests are $40 to ship (1 kg), then maybe it is OK to ship express from China.


    2-4 hours per month per commissioned fiverr person;
    fiverr upfront fee + commission: big with native English speakers, or people who tutor English as a second language; spot ESTJs, ENTJs through language;

    fiverr bid specifier: "Here's the very-high-volume-of-responses upfront fee item" ("virtual assistant" category); Excel work could winnow for especially competent, diligent people: "do you have more than 700K views at a website? share the URL"; "Take the Keirsey/MBTI!" (find ETJ)

    If the virtual assistants from 3-5 of the 5 most wallet-returning countries on earth, plus the US, go well, then the formula could be repeated with:
    Estonia, Slovakia, Russia, Taiwan, Japan, Egypt

    Does a Swedish, German, Russian, or Chinese fiverr exist? Place an English-text-only ad there. Also, a similar virtual assistant (????) ad as at international Swedish, German, Russian, and Beijing craigslist.

    Theory: people that do things are good at stuff, so find someone busy and make them busier: STEM; solicit ST, especially CS ethical bulk emailers, also especially engineers and M services people at fiverr, to get upfront fee + commission for finding ebay franchisees.

    The phrase "virtual assistant conferences" appears online; advertising the fiverr opportunity to those people might get particularly well organized and motivated virtual assistants

    3-11 hours: the webpage that explains the fiverr upfront fee + commission finding of ebay franchisees can also be used, perhaps rapidly, at

    Out of country might = no hassle; so, do upfront fee and commission fiverr people seeking Scandinavian franchisees (great English skills too), and Estonian and Slovakian franchisees. In fact, online there is a "most honest government country list"; advertise to the top 10 (Switzerland, Norway, Netherlands, Denmark, Sweden, Poland, Czech Republic, New Zealand, Germany, France)

    On a different test: "Taking top spot for the most honest nation is the UK"; Japan got 1/2-1/2 (?)
    of those, and the USA, especially STEM people.

    One thing I like about your sending the flubber on an optical bench is that, whether looking at it like the [wjt] version or the [2fries] version, when you (at the world I seem to sense) make a knot out of foam, it falls through itself, but it leaves an anisotropic record in the bubbles it is made out of (if you overextend, don't get it, miss out, you think it's just bubbles).

    So someone really motivated could try to make up a flubber that retained anisotropy after some path event, or especially a *testable* optical lab bench event (even if it is like math anisotropy), after passing through itself, or being knotted* -or- otherwise* topologically subjected to change

    anyway if the thing being tested for was completely novel, and they found it, then light and matter would have some new(!) attribute. That attribute could be awesomely and usefully technologized. [wjt] would just casually say things like: if you put two "hall of mirrors" facing each other and put the flubber between them, the usual dimming you perceive doesn't lead to actual flubber wearing out or extinction; at any hypothetical possible rereflection, however dim, it still is flubber.exists.on

    That reminds me of Feynman either having a theory or writing about the idea in physics that there is just one electron, but it happens to be everyplace in different amounts.

    Aside: the delayed choice quantum eraser, even though I'm very ignorant, seems to make it so there's a future of a photon, and a past of a photon, so that might work against having been reminded of the "there's one electron" idea.

    I'm kind of feeling uninsightful, so I just translated [wjt's] new thing into hackneyed old physics metaphors and I think [wjt] is looking for something awesomer.

    Like two flubbers, or

    Here's one:
    Transverse waves have more tricks they can do than longitudinal waves; like, they can have polarization, when the other isn't a big enough math container/physics container to support polarization. Perhaps at 4D "flubber-space", or -mere- 3D+T "flubber-space", there are travelling things that are equationable extensions on the math series 1.compression_wave, 2.transverse_wave, 3.moomin/ocean_swell, 4.flubber_travelling_thing ---> At groovy books like Gamow's One Two Three...Infinity they make a point, extend it to a line, make a square, make a cube, then make a hypercube, all using only simple mathematical extension of the previous thing. So I just extended the idea of wave from 1->2->3->4 with [wjt]'s flubber as extension 4

    flubber travelling_thing, as extension 4 of (the W word),
    could be mathematically destined to do more tricks than wave.2 and wave.3. It might have entirely new attributes, just like the way wave.2 is the first to support polarization. So anyway, at flubber.travelling_thing, search experimentally for entirely new places to get nifty effects or even store data (like you can store data with polarization).

    I'm massively ignorant, but I heard of Bell's inequality, and how one of the simplest demonstrations is three polarizing filters doing something like "retransparency", so at wave.3.moomin_ocean_lump there might be more nifty wave characteristics, and perhaps Bell's inequality has some different way of being stated, a novel, maybe even meaningful bifurcation of forms, or some kind of new data implications.

    flubber travelling_thing.4 might have not only more nifty characteristics (like 2, 4, 8 completely different than polarization, but progressed new *lab testable* attributes); it might have a Bell's inequality effect, absence of effect, or some other kind of thing with each of the 2, 4, 8 new attributes that go with flubber.travelling_thing.4

    Like, as another question, a simplifier might say: ok, so you need an attribute depth of at least transverse waves to have polarization; does that mean that Bell's inequality is nonapplicable to the too-simple-for-polarization compression.wave.1 stuff, or is Bell's inequality there too? Does it do something "really honking big" because there's just a lot of simplicity going on at compression.wave.1?

    If there is a "really honking big" Bell's inequality thing at kinds of waves (like compression.wave.1) too simple for polarization, what is it? Can you make a technology out of it? Does someone at the halfbakery know what it is already called? What's
    it called?

    Previous material at this annotation:

    if you add another 4th spatial dimension then perhaps there's a new kind of "traveling thing" that has even more tricks than a transverse wave.

    So like:
    "travelling thing.4" -> math says it can do 2, 4, or 8 more things than a 3 spatial dimension wave. They do not have names yet. Fourier representation unknown (but likely!)

    transverse wave: polarization, solitons, fourier representation

    compression waves: no polarization, solitons, fourier representation

    note:
    *I heard of 3D+time as 4 dimensions, but when they do 4 spatial dimensions as mathematicians, the math knots simply fall through themselves and can't be tied. I do not know their names, but I think I read there are stable 4D+T math options where some kind of 4D-space+time arrangements or loopy things or something (a step above, and a complete alternative to, a knot) have an absence of automatic untying/fall-through, unlike a 4D knot; I do not know what the 3D projection of a 4D "lasts like a knot, but differs from a knot" thing is, but perhaps they can be printed with 3D printers, or certainly viewed on a computer or with VR goggles.

    What the math of 4-spatial-dimension "stable like a knot, even though it's different" has to do with [wjt]'s idea is this ---> Is there anything [wjt]'s flubber can do as a shape or form that produces durability, chirality, stability, or (startlingly), like a flubber.4 popsicle stick exploder, sudden energy release? These could all be technologized.

    Plural overlapping delayed choice quantum eraser lab-bench paths might actually make such stable, durable (and potentially new observables at) things, or popsicle-stick sudden-energy-release things, out of [wjt]'s flubber. Or, as I'm having fun with it,
    flubber.travelling_thing.4

    so one weirdly practical thing about the size of the delayed choice quantum eraser (at the actual world I am told I sense, 3D+t) (my own actual experience is that the world I sense is 3D + paranormal Jungian synchronization + t) volume is that it is made of optical components, which, if they were disrupted (fluorine on the mirror zaps all the electron-sea of the metal layers), have an actual minimum size of function in picometers.

    So if you change from, say, lenses made of some crystal that is like 40 picometers on an edge to one that is 30 picometers to (at the protein, 1 nanometer) on an edge (a C300 crystal fullerene lens, or even some massive 100,000-atom transparent protein crystal lens), then the size of the "EM region", "arranged orbitals", and other stuff around it either:
    1) has less than the minimum size to affect time, that is, as a delayed choice quantum eraser component it's too small for "linear chronological progression" at the experiment to be spanned by it
    2) is a span of picometers to a nanometer in which time is different
    3) is bigger than the size sufficient to "do" chronological linear progression

    Another way, besides protein lenses and mirrors and optics, to make a giant delayed choice quantum eraser:

    In really atom-sparse areas I hear there is a thing called a Rydberg atom, perhaps with the electron(s) like 10-20 cm from the nucleus. Theoretically you could make a lens, a mirror, an emitter, a detector, a beamsplitter, all the parts of a delayed choice quantum eraser technology object, out of giant e- orbital diameter, very sparse atoms. Then the different chronological and causality possibilities present at each optical element would span meters. So you would have a Meters(!) big "minimum functional, most parsimonious area" for a time anomaly technology (the time anomaly technology is: the delayed choice quantum eraser)

    Aside: if any of you are good at math: I have read support of retrocausality at the delayed choice quantum eraser; someone also published a refutation, so the experts are saying different things. Another physicist says that it isn't retrocausal, but, in their words, "heralds" material/data. What is the current state of the art on the delayed choice quantum eraser?

    As a really nifty thing, and I think it could actually work: a genetic algorithm could do millions of plural delayed choice quantum eraser designs, see what the physics software said about them, and come up with two bins of output: Bin 1) those delayed choice quantum eraser series/parallel/branched/feedback composition variations which have the least predictable physics;

    and Bin 2) those delayed choice quantum eraser series/parallel/branched/feedback/evanescent-wave actual physical optical bench designs with the very largest amount of retrocausality, or, if that one physicist's "heralding" carries the day, the largest amount of accurate future prediction.

    It sounds a little goofy, but actually doing genetic algorithms on the delayed choice quantum eraser is just making a million models of the math of some emitters, lenses, reflectors (importantly, light pathways; including hypotenuses and XYZ-axis possible beampaths increases options), and detectors, testing them, recombining the physical components they make reference to, generating plural variations, and winnowing again. It's a wonderful use for a workstation or massively parallel internet CPU time.
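    The generate/score/recombine/winnow loop described above is just a plain genetic algorithm. A minimal sketch: the component list, design encoding, and especially the score() function are stand-ins I made up; a real version would call physics simulation software to score each optical bench design.

```python
import random

# A "design" is a list of optical components; the component set and the
# scoring function are placeholders for a real optics simulation.
COMPONENTS = ["emitter", "lens", "mirror", "beamsplitter", "detector"]

def random_design(length=8):
    return [random.choice(COMPONENTS) for _ in range(length)]

def score(design):
    # Stand-in fitness: a real version would ask physics software how
    # retrocausal / "heralding" the predicted behaviour of the bench is.
    return design.count("beamsplitter") + 0.5 * design.count("mirror")

def crossover(a, b):
    cut = random.randrange(1, len(a))       # recombine two parent designs
    return a[:cut] + b[cut:]

def mutate(design, rate=0.1):
    return [random.choice(COMPONENTS) if random.random() < rate else c
            for c in design]

def evolve(generations=50, pop_size=100, keep=20):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        elite = pop[:keep]                  # winnow to the best designs
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - keep)]
    return max(pop, key=score)

best = evolve()
```

    The "two bins" version is just running this with two different score() functions (least-predictable physics, and most retrocausality/"heralding").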

    Another delayed choice quantum eraser experiment is finding out if systems that support compression.wave.1 and transverse.wave.2, but not actual light photons, can do the same exact path as emitter, beamsplitter, lenses, and mirrors, with, say, water waves bouncing around a science museum's physics tank, standing waves in plasma, or, say, XYZ actuators (like physical motion from laser tweezers or laser tractor beams, but not the photonic component) wiggling a transparent actual (PMMA?) jello made of atoms. (Like, really, make the whole thing out of a physical 3D gel that supports 3D+t form.moomin.ocean_lump.3 passage and reflection and splitting and detection... as [wjt] calls the new version flubber, but I just sense the (W word) coming on.)

    So those are some kinds of
    -thick- delayed choice quantum eraser a person could stick sensors on and do stuff with (especially the XYZ-axis plurally interconnected serial/parallel/branching/evanescent-wave/almost Babbage-machine-like NAND gate(s)/soliton (100-10,000 times signal durability) genetic-algorithm-produced version of the delayed choice quantum eraser) that could be made, tested, learned from, and technologized into new technologies.

    Having the genetic algorithm utilize the NAND gate "form" of the delayed choice quantum eraser (perhaps at lab bench version parallel paths, or rejoining branches after retrocausality-causing observations, that feed together to do a NAND operation) is because I read you can make any other logic primitive out of NAND gates, and can make a function-alike duplicate of any CPU/GPU logic circuit with only NAND gates. So if the genetic algorithm uses NAND gate delayed choice quantum erasers at its iterations, winnowings, generated output, and bins of things people want, then a delayed choice quantum eraser retrocausal (or heralding) computer could be one of them (bin 3)
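    The NAND universality claim above is standard logic and easy to verify: every other primitive really can be composed from NAND alone. A quick demonstration (the function names are just mine):

```python
def nand(a, b):
    """The single primitive: 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other logic primitive built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Exhaustively check all input combinations against Python's operators.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a)    == (1 - a)
        assert and_(a, b) == (a & b)
        assert or_(a, b)  == (a | b)
        assert xor_(a, b) == (a ^ b)
```

    Since AND, OR, NOT, and XOR suffice for any combinational circuit, a working DCQE NAND gate would in principle give you the whole computer.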

    I'm enthused about other people's annotations about [wjt]'s idea, that said this is a little interesting:
    Someone who actually knows math and computer science could look at the minimum size of a computer.

    Now, excitingly, there is non-Turing computation as well as other self-sufficient architectures than Turing (confusedly: Harvard architecture?). So, at all the known classes of self-sufficient computing architectures, if you are allowed to send, at a semiconductor embodiment, 1, 2, or even 3 electrons backwards in time, repeatedly, or (physicist: "heralding") 1, 2, or 3 electrons forwards in time, or perhaps just "inspected for value" without work or cycles: which among those possible computers have nifty new areas of actual utility, so they can be technologized?

    As a tremendously pragmatic thing about the delayed choice quantum eraser, they could see if repeated use saturates it, increases it, or wears it to anisotropic output. My perception of the delayed choice quantum eraser is that they run it and get a statistical picture of 100K photons or so. Now, based on [wjt]'s flubber: does the delayed choice quantum eraser variously wear a deep rut or groove; does it saturate a matter, electron, or photon system to failure (or, more nicely said, change)? If you turn on a plasma cathode and anode for a 100,000-atom measurement there is no accessory effect. If you run it for 8760 hours you notice the anode and cathode weigh different amounts, and the glass on the vacuum apparatus has an obvious metal coating. Even, comically, at humans: if you take 1 million xenon flash photos of me in 72 hours I start to get a tan, and my hair would bleach blonde in the UV light. So: run delayed choice quantum erasers 8760 hours (a year) continuously, compared with:
    1) a nonobserved optical bench duplicate;
    2) running at high voltages and currents at the laser diode, such that 99% of laser diodes would be expected to fail in 8760 hours, using laser diodes of such high wattage they are expected to deform the lenses, mirrors, and optics of the optical bench's light path such that they no longer provide measurable output to the photon detectors;
    3) using lasers, not necessarily diodes, that make very minute wavelengths (like extreme UV);
    4) running the whole thing with x-ray optics and something like a dental x-ray source;
    5) running the whole delayed choice quantum eraser off gamma radiation, like photons from Cobalt-60 through a slit, and awesome (x-ray observation satellite instantiated), impressively narrow-glance-angle x-ray optics; then see if the amount of radioactivity generated, or something else about it, changes with delayed choice quantum erasure observation, again over sufficient data collection time that the machine actually wears out (so you can see the analogues to the weight-changed cathodes and anodes: metal-plated glass, not-yet-explainable changes in the refractivity of the optics (refractive index change), or, at mirrors and beamsplitters, the AFM view of billiard-ball racks of atoms that are differently terraced than expected mirror surfaces);
    6) having the alternate path the photons have to have retrocausally taken contain things that disintegrate with radiation while allowing it to pass;
    7) having a thoughtful optics person divide the delayed choice quantum eraser into sections, so that you can co-utilize (components of) the optical path but get a different photon path out of it at a different frequency: such things as a dichroic mirror, a spectrum-and-slot ROYGBIV prism that sends different-color photons down different optical paths, or evanescent-wave bandgap-effect/forbidden-zone perturbation (a couple of prisms, just a nudge apart, that are great as delayed choice quantum eraser optics, but at a different frequency of radiation cause an obvious and directed evanescent wave); leave the evanescent wave detector on all 8760 hours;
    8) the nifty thing is I have heard about what is called optical bench on a chip. If you can make a complete delayed choice quantum eraser with optics/emitter/detector on an IC, then you can make millions or even a billion of them on a 300mm process wafer. That allows you to make the million to billion

    One way to amplify the chronological novelty, and measure it to technologize it, is having a deep learning neural net utilize the findings of 100,000 optical-bench-on-a-chip (IC fab technique optics) pathway novelty variations that have the greatest amount of retrocausality or (physicist) "heralding" at delayed choice quantum eraser optical assemblages/statements/demonstrators.

    Ok, so, you found the extreme ones, then you pass them to a genetic algorithm and a neural network.

    Using the 10,000 most chronologically unusual or also intense embodiments, it does neural network learning and suggests new ones. A billion of these new chronological-novelty-effect-intensified embodiment forms are made on another wafer and tested; more data is gathered; repeat.

    During that time, of course, people are doing actual thought and design around what they have learned from the million or billion delayed choice quantum erasers automatically tested, and I am even suggesting chemical science characterization of any change to crystalline or amorphous form (finding those anomalous effects that are *analogous* to mass-change electrodes and metal sputters at, say, a machine that's busy doing something completely different, like being a plasma advertising decoration or a cyclotron ion source). It just is kind of sensible that the humans, while also doing and auto-IC-fabbing a pure-math-software-guided production of delayed choice quantum eraser multiplexes (and alternates), also make batches of optical-bench-on-a-chip ICs and wafers combining their architectures with those suggested by the genetic algorithm and the neural network.

    9) clock frequency; observer frequency; beat frequency;
    10) Can you stick a photomultiplier crystal/tube on every stage of it, and ("zero detectable energy wobble") get orders of magnitude more retrocausal or "heralding" photons out of it without getting any other energy wobbles? You could photomultiplier-crystal or -tube it up to a quadrillion times more photons. A visible-light photon is roughly 4*10^-19 joules, so if you photomultiplied a sparse photon source like a 10,000 photon/second source quadrillions of times, the moving photon energy produced could still be a nonmelting 1-10 joules per second at the detector.
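    Checking that arithmetic with E = hc/lambda (the 550 nm wavelength and the 10^15 gain figure are assumptions taken from the "quadrillions" wording above):

```python
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

wavelength = 550e-9                  # green light, in meters
photon_energy = h * c / wavelength   # ~3.6e-19 J per photon

rate = 1e4       # photons/second from a sparse source
gain = 1e15      # "quadrillions of times" photomultiplication
power = rate * gain * photon_energy  # joules per second at the detector
print(f"{photon_energy:.2e} J/photon, {power:.1f} W at the detector")
```

    This lands at a few watts, consistent with a nonmelting few-joules-per-second detector event.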

    The thing is, though, that if you look for wobble or something unexpected with the photomultiplier crystal/tube preceding each component of the entire delayed choice quantum eraser pathway, you might find something that is anomalous. That's really nifty because of the possibility of making technologies from the results.

    Another rather weird thing you could do with a delayed choice quantum eraser is to strengthen its signal with a "stochastic" amplifier. I read that at an image below the threshold of computer/human perception, there's some way to add a stochastic signal (TV snow) to it, to raise it above the threshold of detectability and actually resolve an image. There are many places in the delayed choice quantum eraser to add stochastic photons. So, what happens to the stochastic resolution-heightening photons, and their photon sources, when the DCQE (Delayed Choice Quantum Eraser) does the retrocausal/"heralding" path variation?
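    The "TV snow" effect described above is usually called stochastic resonance: a signal below a hard detection threshold becomes statistically detectable once noise is added. A toy sketch (the threshold, signal level, and noise amplitude are all made-up numbers):

```python
import random

random.seed(42)

THRESHOLD = 1.0
signal = 0.8        # sub-threshold: on its own it never triggers the detector
trials = 100_000

def detections(sig, noise_amp):
    """Count how often signal + Gaussian noise crosses the threshold."""
    return sum(1 for _ in range(trials)
               if sig + random.gauss(0, noise_amp) > THRESHOLD)

no_noise   = detections(signal, 0.0)   # never fires: signal alone is too weak
with_noise = detections(signal, 0.5)   # fires often: noise lifts it over
noise_only = detections(0.0, 0.5)      # fires rarely: noise alone is weaker
print(no_noise, with_noise, noise_only)
```

    Because signal-plus-noise fires far more often than noise alone, averaging many trials recovers the invisible sub-threshold signal, which is the amplifier idea in the paragraph above.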

    note
    Now, this is also where doing genetic algorithm elaborations and winnowings makes sense (that is, of course, if an electron can be sent back in time (retrocausal or "heralding"); the DCQE is one approach among the 4.5 possible ways to do that which cross my mind)

    Efficient vegetarian sushi: sushi, sometimes little rolls, sometimes artful pop-in-the-mouth piles; if you chew and gulp it you get the flavor. What if they made sushi that was measured to purposefully be a length that caused one more bite per roll or assemblage (say from 1-2, to 2-3, or even 4) in unsupervised vegetarian sushi eaters? One thing that might do this is layout; like, it could have 2 or 3 artful indentations in it /T\/T\/T\, say three bands of seaweed equispaced so it looks like there are more rubberbands remaining to hold it together if you only bite 1/3 of it off. And, awesomely, if you get the idea you're supposed to bite off on the edge of an axial ==))==))== band, lasers could, while leaving the things structurally strong enough to pick up, have cake-cutter-combed it into snapping off easily just at that spot.

    Epigenetics of human genes that imitate the epigenetics of hibernators' hibernating protein receptors could be longevity drugs, or even cardiovascular benefit drugs: hibernating bear plasma causes survival from cardiac ischemia to go from 30% to 80% in model mammals. The receptors at the bear, and at the 50-percentage-points-more-rescued lab mammals, for those plasma fractions (which may actually be isolated, named chemicals (proteins) already) could also be receptors at humans. Changing the epigenetics of those existing human receptors to make them much more receptive could cause resistance to harm from heart attacks and stroke (ischemia), notably at people with a hereditary history of heart attack or stroke.

    natural product makes epigenetic modifier to make bear plasma fraction receptors more receptive

    Nootropics and speaking birds, like parrots, and possibly crows: they could test a variety of known nootropics on parrots and other talking birds (crows, mynah birds?) to see if they learned more words from bird language teaching software that likely already exists (it used to be vinyl records people would play for their talking birds to teach them words). So, for example, they could find out if phenylpiracetam causes 34% larger vocabulary gain after 1 month of talking bird "educational" software. Then they could test new nootropics, and especially nootropic peptides and proteins like klotho variants, a library of C7-C22 omega-3 fatty acids (C22 is DHA), and mass-fractionated brain, such as the nootropic cerebrolysin, to isolate the particular peptides (and proteins from fractionated brain extract) that are nootropic. Also, noting that the mass of the brain that knows 3000 words (gray parrot), plus the mass of a bird brain that can use tools (crows), is sort of like 6 grams: even though a human's brain is more than 300 times larger, it is possible to be able to use tools and speak 3000 words at 6-12 grams of brain mass (2 birds combined)

    Among 40 crow species, one species is the most cognitively rich and measurably cognitively capable; which one? Gently and humanely, with animal-well-being awareness, utilize that species of crow for nootropics experiments, characterization, and improvement, including feeding (enteric-coated nanosomal, to deliver protein at the GI tract) or possibly injecting crows with mass-fractionated, electrophoretic (protein and peptide) fractions of crow or other bird brains to see if, like pig cerebrolysin, any of the crow/parrot brain fractions are notably nootropic to crows and speaking birds. Then also feed/inject rats and marmosets with protein/peptide (mass-fraction/electrophoretic) concentrate from crow, or also parrot, or also macaw brains to see if their cognitive function is enhanced. It is possible the amounts and labwork could be easier if tested with ostrich or emu brain (which might or might not be 20 times higher in volume, and still nootropic; I perceive emus and ostriches are agricultural animals in Australia). If the proteins are found to be nootropic they can be made synthetically with bacterial protein production, or even just tissue culture of bird brains, for further concentration.

    Human volunteers could be measured as to the effect of bird brain proteins/peptides as nootropics, notably with enteric nanosomal delivery, and, if they are nootropics, technologized and made with D-amino acids so the nootropic proteins and peptides go undigested by enzymes (generally, the same changes to insulin that have produced oral insulin can be applied to other peptide and protein drugs, like crow-brain-based nootropics)

    Some things, such as college education at humans, may improve cognitive fluency even though g (like IQ) doesn't change much from education, and it is published as having a tropism to a biologically determined amount (imaginably, monozygotic twins educated differently have g (like IQ) score convergence on a similar value).

    Longevity benefits of college education: college education is associated with greater longevity at humans (I do not remember, but this may be true even when things like income ($) and other things are accounted for); I perceive I may have read that it is possible that cognitive enrichment contributes to longevity; thus, some nootropics, notably some more than others, or some with specific neurons or brain regions (neocortex) of action, or even external nonchemical measures of nootropic effect (say, the nootropics that happen to, while making people more cognitively capable, also heighten social life 7-20%), could also be longevity drugs;

    They could test the 80 most popular nootropics as longevity drugs at zebrafish in 96-well plates and at rats (I read rats have more complex cognition than mice), multiplexing them. So: each rodent on 4 nootropics, 20 separate experiments, 8 mice per experiment to get a p-value, 200 multiplexed and also some undrugged rodents, to screen a large fraction of the published and manufactured 2021-and-on nootropic chemical space for longevity benefits. At 49 cents/mouse/24 hours (WSU animal facility, 2005ish) and 200 mice, that is about $100/24 hours, or approximately $145,000 to support the mice for 4 (yay!) years of longevity study.
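    The budget arithmetic above checks out; spelled out (the per-mouse figure is the post's own ~2005 WSU number):

```python
mice = 200
cost_per_mouse_per_day = 0.49   # WSU animal facility figure, ~2005
days = 4 * 365.25               # four-year longevity study

daily = mice * cost_per_mouse_per_day   # ~$98/day, i.e. "about $100"
total = daily * days                    # ~$143,000, i.e. "approximately $145,000"
print(f"${daily:.0f}/day, ${total:,.0f} over 4 years")
```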


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)