• High velocity translational skiing and the beam-people-there Transporter

    From Treon Verdery@21:1/5 to All on Wed Oct 12 09:27:58 2022
    Could phonons, plasmons (arrays of atoms and holes that move, physically translating while retaining order), or higher-velocity spintronic equivalents propagate information as little plasmonic groups cross edges through a medium, possibly a preservative
    or amplifying medium, to transport the specific layout of atoms, and just possibly some of their quantum states, to a Reader like a 3d embroidery hoop or coating surrounding the object? The migrating plasmons might have error-reduction bitwise operations,
    similar to a cellular automaton, to preclude data dilution with propagation out to the plasmon reader coating.
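
    A minimal sketch of the kind of error reduction that paragraph imagines, assuming a plain 1D majority-vote cellular automaton over a bit payload; everything here (the payload, the 10% flip rate, the three smoothing passes) is an illustrative assumption, not a model of actual plasmon physics:

    # Hypothetical sketch: 1D majority-vote cellular automaton as the kind of
    # bitwise error-reduction step imagined for propagating plasmon groups.
    import random

    def majority_step(bits):
        """One CA step: each cell becomes the majority of itself and its neighbors."""
        n = len(bits)
        return [1 if bits[(i - 1) % n] + bits[i] + bits[(i + 1) % n] >= 2 else 0
                for i in range(n)]

    # A clean "plasmon payload" with a few propagation errors flipped in.
    payload = [1] * 16 + [0] * 16
    noisy = [b ^ (random.random() < 0.1) for b in payload]

    # A few majority-vote iterations smooth isolated bit flips back out.
    restored = noisy
    for _ in range(3):
        restored = majority_step(restored)
    print(noisy, restored, sep="\n")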

    I heard about something called spintronics; does spintronics have a plasmon/phonon equivalent? Spintronics could be much higher velocity than electron-hole plasmons.

    Plasmon/phonon data-reporting transporter applications, such as knowing what is inside a crystal, could possibly go with new radiation, particle, vibration (like THz), and EM detectors to make new kinds of sensors. New kinds of sensors benefit robots and
    automation.

    It is well travelled at things I write, but could the New Scientist quantum camera, which makes a figurine outline from quantum-entangled photon absorption at the figurine surface simultaneously with the sensing surface of the computer camera, making an
    image of the figure without a direct/transmission/reflected optical path, be combined with a plasmonic/phononic crystal to create new kinds of detectors? Combining the quantum camera with plasmonics creates new detectors and depth vision. The phonons would
    migrate until they reached an edge or a crystal anomaly (from the crystal detecting something); on reaching a crystal novelty-center they would change and cause their different sibling quantum-linked phonon or plasmon, which was at the surface of a
    reader/sensor, like a computer camera (at the figurine example), to be specifically quantum polarized (possibly spin polarized) from entanglement; that images the 3d shape as well as likely the energy level and form of the thing the first plasmon reacts
    to. This is a way of seeing at depth of materials, possibly 3d computer chips, or even cytomaterials or tissues: when you put a nanosized plasmon-generating crystal next to a cyte, the nanocrystal makes propagating phonons/plasmons which wash up against
    and possibly penetrate the cytomembrane and cytostructure, while what they see/react to is recorded from the quantum camera effect.

    One vague idea I have about reading plasmonics/phonons at a perimeter or edge is that little or large organic molecules could have a plasmonic representation; say the edge of a quiltlike plasmon just touches the edge of a little organic molecule, like a
    carbohydrate. The quilt hops around the carbohydrate molecule, near-touching, plasmonically/phononically modifying the plasmon-quilt without reacting chemically with the carbohydrate. So rather than an atomic force microscope tip, the plasmonic quilt
    changes its internal plasmon matrix (possibly an actively computing matrix like a cellular automaton) at each of the C-C-C of the carbohydrate. The carbohydrate goes unreacted but it is read/stored plasmonically. Just for niftiness: I read about a time
    crystal at Wikipedia, sort of a crystal with more than one stable ground state so it automatically rotates through states; so a plasmonic/phononic time crystal could iterate the matrix-quilt while transporting measurements to a quilt-internal or external
    computer/sensor.


    Math of statistics and finding things out: A human looking at a map of US states and counties can tell which are the richest. Then, as a lay perceptor, I think humans do actual math correlations/other equations and/or just gaze at the way overlain data
    sets fit. People looking at previously uncombined data sets sometimes find new (perceived) trends which can then be tested as hypotheses.

    Database comparisons as instantaneous statements of matrix data are reminiscent of finding actual predictive relationships at data, that could be tested, or have highly unique probability of occurring other than by chance (like a P value, but better and
    nonspecific); so looking at the counties you could predict college attendance by county wealth, even if you were absent a theory as to why.

    So that brings up the math of what is the minimal matrix or block size to make a 2d map overlain on an integer dataset. Like, how good a tic-tac-toe board or hexagonal tile plane do you need to get a semivisual guide to data 1) where a human glancing at
    the visual would see a trend (fMRI), 2) where an AI, like a deep learning AI, would imitate a human glancing and find a trend, and 3) where some actual math at an actual formula would find a trend (like multimodality or even an east-west gradient at a
    tic-tac-toe board) parsimoniously from less data?
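
    As a hedged illustration of 3), here is one way a formula could find an east-west gradient on a tic-tac-toe-sized grid: correlate column index with cell value. The grid values are made up, and statistics.correlation needs Python 3.10+:

    # Illustrative sketch: how coarse can a grid be and still show a trend?
    # We test an east-west gradient on a tic-tac-toe-sized (3x3) tile of a
    # synthetic dataset by correlating column index with cell value.
    import statistics

    def east_west_trend(grid):
        """Pearson correlation between column index and cell value."""
        xs, ys = [], []
        for row in grid:
            for col, value in enumerate(row):
                xs.append(col)
                ys.append(value)
        return statistics.correlation(xs, ys)  # Python 3.10+

    grid = [[1, 4, 9],
            [2, 5, 8],
            [1, 6, 9]]  # values rise from west to east
    print(east_west_trend(grid))  # close to +1 -> visible gradient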

    So are there entire areas of the observable universe that fulfill the math of hypothesisless true correlation? These might be an area of science, and the technology that comes from it, that is particularly easy and effective to investigate. This brings
    up a new (undecidability notation) form of D3 island of truthiness; an actual area of physics and other science separate from deduction or induction.

    My perception is that some of science, like physics, uses reduction (simplification) to produce predictable, modellable components, like electrons or photons or math-fields, then builds up larger things from these; the esteem goes to the theories that
    most effectively build up models that accurately predict the observed universe. That process reminds me of a combination of induction and deduction. That said, if math areas of hypothesisless correlation create islands of truthiness (D3) completely
    outside and different from induction and/or deduction, then there could be a restatement of physics, and new physics research, based on math areas of hypothesisless truthiness. The only one (math; hypothesisless; truthiness) that I think of instantly is
    the dubious (yet possibly testable): math winnowing of anthropic principle variants at a multiverse kind of set theory implies: if you perceive you exist, then it must be at a physics that permits that.

    Now testability matters, notably at core, as there is no way to tell if an actual existing system is constructed in part with a non-hypothesisless math component. Keep doing the science experiments.

    Math entertainment:
    Math Description of a universe where correlation is always causation; then finding areas at our universe-we-live-in that have or approximate that mathematical set-up.

    Looking at a map of US counties colored on wealth, then comparing any other thing to it might often generate testable hypotheses. That is normal during 2019 AD. Are there truth regions possible without a hypothesis? Areas of validity without doing a
    subsequent experiment based on measuring the new hypothesis? That would be really different, and kind of like finding (math/data) places of accurate knowledge (at undecidability notation, a new kind of D3 local-truth island). The thing is, could there be
    a math description of a simple matrix, like a tic-tac-toe board, where to have two matrices compared the statistics equations would always have to be true, that is, doing the math, the P value would always definitionally be p=0.0? At such a math space,
    if you saw any data trend with your human vision it would always be certain, rather than competitive with chance, and would always be true. Interestingly, no hypothesis is necessary to then make a statement about the system, "Rich counties have more
    plastic cards" or something. Also, even if the math works, it might not be a hypothesisless certainty because of the Gödel incompleteness theorem and the Quoran's statement that arithmetic with addition and multiplication cannot prove its own consistency,
    so only for certain assumptions of mathematics would hypothesisless "true" correlations exist.

    It would be great to find or make big datasets with math-permitted hypothesisless correlations which were mathematical-space definitionally true. Those equations derived from the data could then be used predictively at what might be entirely new
    datasets with the same math-matrix-true organization (or of course you could just process the first 10% of a big dataset, find the "hypothesisless truthiness effects", and then isolate those "truelike" areas or data of interest at the other
    90% of the data set).
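
    A toy sketch of that 10%/90% workflow, with entirely synthetic county-like data; the 0.5 correlation cutoff is an arbitrary assumption standing in for whatever "truthiness effect" detector one actually had:

    # Sketch of the 10%/90% workflow described above: mine candidate
    # correlations on the first 10% of a dataset, then check only those
    # candidates on the remaining 90%. All data here is synthetic.
    import random
    import statistics

    random.seed(0)
    n = 1000
    wealth = [random.gauss(50, 10) for _ in range(n)]
    college = [w * 0.8 + random.gauss(0, 5) for w in wealth]   # real relation
    noise = [random.gauss(0, 1) for _ in range(n)]             # no relation

    cut = n // 10
    candidates = {}
    for name, series in [("college", college), ("noise", noise)]:
        r = statistics.correlation(wealth[:cut], series[:cut])
        if abs(r) > 0.5:                 # keep only strong 10%-sample effects
            candidates[name] = r

    # Validate the survivors on the held-out 90%.
    for name in candidates:
        series = college if name == "college" else noise
        print(name, statistics.correlation(wealth[cut:], series[cut:]))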

    Could sensors be built around hypothesisless matrix-true math? These physical sensors could then make something like an image (like self-driving car video) where if you can predict it with any of a group of equations, it is true.

    Could there be a new kind of statistical process control based on a math of hypothesisless true correlation spaces?

    Encouraging new physics technologies and hypotheses: Could big physics, like colliders, have a math-built-in setup so at experiments constructed a certain way, if it was measured, you could know it exists, as compared with making a dozen (gravity waves)
    to a few billion (lasers) measurements to compare/contrast to the stochastics of chance? "Well, we constructed the new physics experiment out of two tic-tac-toe boards; overlain, if we see anything, with computers or our human vision, it's mathematically
    supported as an actual effect." I think an engineer might point out that the math of your gravity wave detector might be tight, but an oversize-load truck driving above it could instantly make the plurality of measurements, at a preexisting math of
    unlikely-to-be-chance, the better guiding math.

    MWI:
    Can a person define or generate a multiverse universe with the math of predictive certainty, absent generating and verifying a hypothesis, with a defined or undefined future? That could make "create once, run well, terminate predictably and well"
    universes, with or without sentience. Can sentience be created in a math-space universe where if it can be thought, it is true (hypothesisless certainty)? This provides benefit to the inhabitants as they are always right, about everything, no matter what
    they think. Noting nonrepeating cellular automata from simple things like the 1,1,0 (Rule 110) guide, at an "if you can think it, it's true" form of universe, it could/would still be continually generating fresh, previously unknown beneficial material.
    That gives what we will call "people" or "humans" the ability to always be right at thoughts, yet be nondetermined and "unclockworklike" at their perceived universe.


    “People looking at previously uncombined data sets sometimes find new (perceived) trends which can then be tested as hypothesis.”

    Ok, so how is hypothesisless truthiness different than the "=" symbol in an equation? Also, are there non-Turing cellular automata, different from the 1,1,0 Rule 110 (Wolfram company) Turing-complete automaton, that create D3 regions of hypothesisless
    truthiness? These non-Turing computers that produce regions of truth are ways of finding something out, yet are computer-science expanding and alluring because they are non-Turing machines.

    If you embed a non-Turing cellular automaton inside another non-Turing automaton, do you get a self-editing system that can edit itself to produce an output rather than using a loop? These could be instant-math (as compared with iterating) problem solvers.
    One possibility (like perturbing one tiny branchlet of a fractal with the preference of changing the macrostructure of the entire fractal) is to rewrite the surrounding automata1 with the average, or mode, of the internal nested (embedded) automata2.
    That causes the most frequent output (mode) of automata2 to completely reshape the automata1 which its subset automata2 generates. I somehow think that rewriting the surrounding structure with the mode of automata2 differs from computer-science
    recursion. So I think the idea generally is to do computing without a Turing machine, and to change big things with tiny modifications at a branch.
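
    A sketch of just the mode-rewrite mechanics, using an ordinary Rule 110 inner automaton (so, to be clear, nothing here is actually non-Turing; it only shows automata2's mode overwriting automata1); the sizes and step counts are arbitrary:

    # Illustrative-only sketch of the mode-rewrite idea: run an inner Rule 110
    # automaton (automata2) for a few steps, take the mode (most frequent bit)
    # of its output, and overwrite the surrounding automaton (automata1) with
    # it. This is still ordinary Turing computation; it shows the mechanics.
    from statistics import mode

    RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
               (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

    def step(cells):
        n = len(cells)
        return [RULE110[(cells[(i-1) % n], cells[i], cells[(i+1) % n])]
                for i in range(n)]

    automata1 = [0] * 32
    inner = automata1[12:20]          # automata2 nested inside automata1
    inner[4] = 1                      # seed the inner automaton
    for _ in range(8):
        inner = step(inner)

    # Rewrite the whole surrounding structure with the inner mode: a tiny
    # branchlet reshaping the macrostructure, as the paragraph describes.
    automata1 = [mode(inner)] * len(automata1)
    print(inner, automata1, sep="\n")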

    Ok, just to do some Quora-based reacting: it seems like the vast majority of things at the observable universe that get measured have multi-item, multicomponent parts. There is even the notion that the normal distribution (frequent at things) is sourced
    from basic combinatorics. So when physicists and others make an effort to describe a newly measured or theorized system like the multiverse, is presuming things like "a flat universe, or a round universe" missing out as a result of simplification (
    simplify to build constructables from)? Is there a math- and theory-legitimate way to note, previously inductively*, that middle complexity is the usual thing, so a testable middle-complexity theory could have value? Big computers, deep learning, and AI
    could possibly start with nearer-to-middle-complexity new models of new physics measurements, technologies, and new physics theories and then isolate their heightened predictive ability from a middle-complexity theory source. "Instead of radiating from
    a point source, an origin universe radiates from multiple sources, metaphorically similar to a Gaussian distribution of Boltzmann brains" as a comic yet nifty version. As I am writing things for online publishing it is my duty to be accurate and
    earnest, yet I am amused with the idea that Boltzmann brains, each with their own anthropic principle zone, expand towards each other, then the ones that get along together persist, causing peace in the multiverse.
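
    The normal-distribution-from-combinatorics notion is easy to watch directly: sums of many independent coin flips bunch into a bell curve. A small sketch, with arbitrary sample sizes:

    # Quick check of the combinatorics claim: sums of many independent binary
    # events (coin flips) pile into a bell curve, which is one way the normal
    # distribution shows up "for free" in multi-component systems.
    import random
    from collections import Counter

    random.seed(1)
    sums = Counter(sum(random.randint(0, 1) for _ in range(100))
                   for _ in range(20_000))

    # Crude text histogram around the mean of 50.
    for k in range(40, 61):
        print(f"{k:2d} {'#' * (sums[k] // 40)}")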

    *I note "previously inductively" because (repeat measurements -> induction featuring math improvements); yet if mathematically constructable hypothesisless correlators (where correlation, at those specific systems, from the parsimoniousness of the math,
    possibly matrix math, causes correlation to always = causation at that definitional space) can create math spaces in utilized physics, theoretical physics, and technologized objects, then those new things are outside induction and deduction; that seems
    new to me and could be useful.

    Possible reply: because FEA (finite element analysis) works: the aggregates of minimal descriptions actually are predictive. Being predictive, they have value. Even if you presume middle complexity, modellability from parts (even FEA or theories built
    around collections of definitional minimal-sized forms), middle-complexity forms could still have value and/or be better predictors. Think of some zig-zaggy squares made with pinking shears; you can tessellate those, so metaphorically middle-complexity
    object-chunks can have predictive capacity at various fabrics, including theories.
    I can see how AI can improve physics and the technologies that arise from it.

    Is there a middle-complexity math thing (thus possibly a physics thing) that you could not find from testing or aggregating mini-components, that actually exists and influences the human-experienced universe? Say the idea of a Gaussian, absent the actual
    collection of points. It does have predictive power, but you might not ever posit its existence based on a series of individual measurements of theory components.

    Another area of middle complexity might be (creating/explaining) emergent properties. Although minimized component definitions are something I am writing about with a tropism towards expanding beyond them, it would be nifty to generate a catalog of new
    emergent properties, possibly even one generated at an AI that makes vast quantities of permitted shapes ("while you were modelling a spherical point, the AI just said 'pringle'!" or "Hey, non-orientable surface!"), producing a big catalog of new
    emergent properties people could then use. The AI or software is producing a whole bunch of "middle complexity" data primitives as compared with building up from components.


    So what physical systems could be hypothesisless? At a plonkish level, I am reminded of the idea that if you just make things out of atoms, then the math-biasing of those systems, and (truthiness-at-matter) math regions where correlation always = causation,
    might be automatic.

    Everettian many worlds, as I perceive it, comes from making the math function at the Schrödinger equation. So it is at least a noninductive math-space, even though things like the Schrödinger equation only predict the hydrogen atom (20th century AD
    version). I once thought, among other possibilities, that since MWI depends on the Schrödinger equation, and the Schrödinger equation, during the 20th century AD, predicts the hydrogen atom, perhaps MWI universes might be generated composed only of
    those things that the Schrödinger equation can actually predict; so at a minimum, entire new universes made up only of hydrogen and space, which, if I imitate 20th-century physics, then have a gravity thing where they become stellar objects. So every
    new universe would be a "diffuse bang" that then did the predicted physics things to form stuff.
    Just thought I would type that for entertainment.
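
    For reference, the "predicts the hydrogen atom" result leaned on above is the level ladder E_n = -13.6 eV / n^2 that the Schrödinger equation gives for hydrogen; a minimal sketch of the levels and one emission line:

    # The familiar hydrogen level ladder from the Schroedinger equation,
    # plus one emission line as a sanity check.
    RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV

    def energy(n):
        return -RYDBERG_EV / n**2

    for n in range(1, 5):
        print(f"n={n}: {energy(n):8.3f} eV")

    # Photon energy for the n=3 -> n=2 transition (H-alpha, ~1.89 eV).
    print("H-alpha:", energy(3) - energy(2), "eV")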

    I do not have any idea about the truth of the Many Worlds Interpretation of physics (MWI) but I have thought of approaches and technologies to test the MWI.



    Big computers might be able to do something with middle complexity as the new models that are then used with first principles to generate new hypotheses, new physics, and new technology. I am reminded of what I perceive are called Platonic
    forms; comparatively rich things that have nonreducible meaning. It is possible some of the middle-complexity forms that the software finds will go well with human cognition styles, causing actual humans to make new theories, hypotheses, and technologies.

    Let's say a human hears from a computer, "If you do not know where you are, you are likely in the middle", so then the human thinks, "perhaps then I am a mid-sized sentient organism; could there be tinier sentient organisms than me?"

    Then the human devises an experiment to look for something like extraterrestrial life, not in outer space, but in tinier regions of existing human space. They find out that the immune system is capable of not just doing what the body says, but is capable
    of learning (brief or lengthy) programs of its own, and even generating new programs. Not necessarily a communicable-with sentience, yet novel to me.

    Encouragingly for the middle-sizeists, the volume of the human immune system is within an order of magnitude of the volume of the human brain. If it is possible that the number of recombinations of an immune system is larger than the number of neurons at
    a human brain, then there is a "no emergence yet noted" computational basis for something as effective as a human brain. Perhaps the immune system is a P-zombie! Experiments are then thought of to ask the immune system if it is sentient, or just a
    computer that writes its own programs.
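
    A back-of-envelope version of that comparison, using commonly cited ballpark figures (roughly 8.6 x 10^10 neurons; antibody-receptor diversity estimates from V(D)J recombination are often quoted at 10^12 or far more, so the low end is assumed here):

    # Order-of-magnitude comparison for the paragraph above. Both numbers
    # are rough, commonly cited estimates, not measurements.
    import math

    neurons = 8.6e10            # approximate neurons in a human brain
    antibody_repertoire = 1e12  # low-end estimate of possible V(D)J receptors

    print("neurons:             10^%.1f" % math.log10(neurons))
    print("antibody repertoire: 10^%.1f" % math.log10(antibody_repertoire))
    print("repertoire/neurons = %.0fx" % (antibody_repertoire / neurons))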

    So, the middle-complexity forms could have value generating new science and technology.


    Delayed quantum choice eraser (DQCE) notes as prompted with reading the wikipedia article https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser#Other_delayed-choice_quantum-eraser_experiments on DQCE:

    “pointed out that when these assumptions are applied to a device of interstellar dimensions, a last-minute decision made on Earth on how to observe a photon could alter a decision made millions or even billions of years ago”. That suggests some MWI
    technology like observing a more human-beneficial anthropic principle prior to the theorized coalescence of the human starting planetary system. Although, I have been told by a time traveller, JY, that the human experience world, possibly the universe,
    was created.

    More Wikipedia: “While delayed-choice experiments have confirmed the seeming ability of measurements made on photons in the present to alter events occurring in the past, this requires a non-standard view of quantum mechanics. If a photon in flight is
    interpreted as being in a so-called "superposition of states", i.e. if it is interpreted as something that has the potentiality to manifest as a particle or wave, but during its time in flight is neither, then there is no time paradox. This is the
    standard view, and recent experiments have supported it.” So that brings up set theory, and the possible definitions of an unexamined set. There’s structure, like “there are set elements”, or a “set without elements”. As well as things of
    unknown validity I have glimpsed at Quora, like “photons are a field”; so does a field, being a set element, have an effect, possibly testable with a new DQCE experiment, on retrocausal action? I have a feeling it is well described at physics, but
    the math description of the field that, when observed, expresses itself as a photon is unknown to me. Does it have any mutually exclusive parts even though it is a field? Perhaps since it precedes wave-nature it is absent doing node and antinode things,
    but has a shape like a U (more likely to be actualized as a photon near its emission point or near something like an atom that gloms it) with the anisotropy of the distal parts of the U having a DQCE testability or effect. Is there anything about a photon
    field that has a separable quantizable (quantum) state that is separate from/different than the quantum states (even spectral levels) of the photon? For different photons to have different energies yet all be considered to be a field suggests there
    is some, potentially readable (detectable), component that supports/comprises wavelength or frequency. Can you observe just a frequency of a photon field without observing the photon, and do a DQCE experiment with that observation?

    A Quoran says, of photon fields, “Electric charges interact with each other exchanging photons with energies proportional to their frequencies.”, which suggests that if, as it says, EM fields are based on photon influence which differs from absorption/
    re-emission, then maybe the charge of an electron wobbles a little if you partially observe the photon field that its EM is composed of; or just perhaps a system that would ordinarily be just short of the energy to jump an electron up a quantum level
    to make a (detectable) photon emission can get extra energy from a partial photon-field observation (perhaps something hinting at wavelength), thus showing that some enquiry short of producing an actual photon, at the photonic field, can affect other
    matter and/or energy. That field effect could then be used at a new DQCE retrocausality experiment where the photon field has (or perhaps has not!) quantizable pre-photon characteristics.

    Another Quoran writes, “I think the answer is there in pair production and annihilation along with other hard scientific evidence such as the Einstein-de Haas effect. We can make an electron and a positron out of photons in gamma-gamma pair production.
    Then we can diffract electrons and positrons, and then when we annihilate them, we get the photons back. So the electron and the positron are in themselves a configuration of the photon field.” That seems to suggest it is possible to make many
    varieties of action, mass, or activity at a photon field. It is possible some of those numerous varied photon field possibilities could be used to clarify the “has the potentiality to manifest as a particle or wave, but during its time in flight is
    neither” thing that says “neither” is absent retrocausality. Perhaps some specialized versions of photon fields support retrocausality directly, as they are absent “neither”ness.

    Those retrocausal custom “neither”less photon fields could then be measured as to their possible (potentially valuable) chronological isolation, if any, from the global experience of time. Also, these “neither”less photon fields could perhaps be
    found at nature. If findable at nature then they could be observed at a far distance, thus beneficially changing the human-inhabited universe retrocausally.

    Are photon fields near matter nearly always particle- or wave-ized? That might make the retrocausality of the DQCE the usual effect at matter or near matter, as most photons around humans are also near matter or electrons. Is it rare that a photon field
    would go preobserved/unobserved? Also, what about the thing where it says electrons communicate/share effect via a photon field? Is that electron distance so minute that it is absent photon-field determinacy, suggesting DQCE works differently on electron
    systems as compared with atom systems?

    What if you retrocausally modify an atom (a neither-less DQCE occurrence) yet the atom has electrons? Do you get blended, optimizable results? Can the blend be customized to produce beneficial new technologies like chronological insulators or amplifiers?
    Like: nucleus retrocausal at DQCE, electrons resolved as particle, compared with: nucleus retrocausal at DQCE, electrons resolved as wave; do they have different DQCE retrocausal effects or produce engineerable time technologies?

    Retrocausality might be stronger with more massive objects. Also, as to DQCE retrocausality going with wave- or particle-ness rather than “neither”ness: are there any atomic particles or other things (even macroscopic quantum things I have read about,
    like 2mm at 2018 AD) that always have either waveness or particleness? I think a blob of matter 3x the size of the double slit’s separation distance is likely, statistically, usually a particle, although there is a real finite chance it will wave its
    way through the double slit. The thing is, though, does the preponderance of one state cause DQCE retrocausality prevalence notably at blobs of matter? I doubt it is a ratio, but what if DQCE retrocausality is like the ratio (probability distribution) of
    macroscopic waveness or particleness : “neitherness”? That doubtful ratio thing would cause macroscopic blobs of matter to be more retrocausally affectible.
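
    Rough numbers behind the blob-of-matter intuition: the de Broglie wavelength lambda = h/(mv) of anything macroscopic is fantastically small next to any slit separation, which is why waving through is only a “real finite chance” and not the norm. The masses and speeds below are illustrative picks:

    # De Broglie wavelengths: an electron vs a macroscopic 1 mg blob.
    H = 6.626e-34  # Planck constant, J*s

    def de_broglie(mass_kg, speed_m_s):
        return H / (mass_kg * speed_m_s)

    # Electron at 1e6 m/s vs a 1 mg blob at 1 m/s (illustrative values).
    print("electron:", de_broglie(9.109e-31, 1e6), "m")   # ~7e-10 m
    print("1 mg blob:", de_broglie(1e-6, 1.0), "m")       # ~7e-28 m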

    What if you use a preobserved blob of matter, so that you already have it described as a wave or particle before it meets the DQCE apparatus? It still gets its wave/particle opportunity, again, at the DQCE, yet you know what you have utilized at it. I am
    kind of being extrapolative, but one of the 1999 AD DQCE things is observing the system later, to change a photon’s path retrocausally. Could either omitting or recording the state, or, at a subsequent perhaps after-experiment moment, viewing the matter-
    blob’s first definitional wave/particleness have an effect on the DQCE part of the apparatus? If the idea is to observe it first, to prevent “neitherness” causing actual retrocausal effects, does omitting a record of the form-producing pre-DQCE
    observation have an effect?


    What if the DQCE with photons or diffractable/particleizable matter does the initial measurement/characterization of the matter-blob as one of its available retrocausal actions? This is possibly a constructible time-feedback technology.

    Another Quoran says, “the energy, momentum and angular momentum associated with a specific excitation of the photon field vanishes, while at the same time the properties of a corresponding excitation of the electron field (which we perceive as an
    electron) change.”
    So from an MWI perspective, are there universe differences based on these possibly isolatable elements: “the energy, momentum and angular momentum associated with a specific excitation of the photon field…[excites an electron]”? So the MWI universe
    from an electron doing an emission-level attainment and photon emission event could possibly be customized with modifying the angular momentum of the energizing photon; or possibly doing some impressive laser thing where the photon re-emission occurs at
    some externally guided momentum or angular momentum, or spin. Like I read you can direct spin with magnets or lasers, so re-emitting a photon in a magnetic field could affect the MWI universe generated. The quote mentions three things, so there are three
    factorial (3! = 6) orderable variations that could affect the new MWI universe from each photon action. I previously wrote about nesting MWI universes; it is possible that linking these three items (energy, momentum and angular momentum) at photons
    and/or between atoms could also cause two MWI-universe events to depend on each other, possibly at different time scales, producing contingent, connected, and/or chronologically related new MWI universes.

    Silver is an element that conducts electrons at two orbital levels simultaneously, whereas I perceive with many other elements it is just one “external” orbital. Silver could have a possibly wider variety of MWI universe creation technologies as a
    result of its two-electron conduction and/or quantum level effects. Nesting or contingencies at MWI universes could also be effected, possibly improved, or new forms generated, at silver systems.

    There is a slight possibility that silver used at new or previously described MWI verification/refutation tests could heighten MWI test effectiveness. At one of a few previously described “wobble” MWI tests it is possible that the two-electron system
    might produce a different amount of “wobble”. So, a frequently mentioned “wobble” test: have about a billion locations (a billion is kind of like flash drive electron tunneling plenum volume) on a chip made with IC technology, then energize it with
    electrons or zap it with a laser, then find out if adjacent locations at the array do something novel, possibly from energy saturation/desaturation (note the littler-than-electron possibility as well).

    Planck length thing: If an electron event produces an MWI universe, does that suggest that some measure littler than an electron would be beneficial at MWI tests? Possibly a novel, different-than-prediction quantum level photon emission. If the MWI
    event causes saturation (greater local/initial universe energy) then the emission spectra line might go up (because the thing is still working off one electron), so it could be beneficial to create a functional measurable that measures “wobble” that
    is compatible with a one-electron change. At desaturation the “wobble” would cause the emission spectra line to be lesser than the norm. Also, I read about a thing that might be called a Planck length or Planck volume. Is there anything measurable
    that occupies less Planck volume than many other things, which, Planck volume used, might change from MWI universe creation “wobble”? So like, if a photon or electron “occupies” a particular Planck volume, does some expansion or shrinkage event
    from “wobble” cause the Planck volume to change? I do not know of a way to measure change in the Planck volume of a photon or electron. Unless: take the thing where you have a few hundred quantum entangled (linked) photons directed at one photon, or
    electron, or optimally a particle/wave of size one Planck length; then change just one of the hundreds of linked photons, thus causing a fractional effect at the one-Planck-length/volume particle or photon system that is multiply entangled. Then, at
    that smaller-than-Planck-length technology, it is possible that the resistance to the change (the ease of observability, or change in ease of observability, of the one of hundreds of entangled photons) could measure “wobble”.


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)