• Middle Complexity and Hypothesisless Correlation

    From Lina Dash@21:1/5 to All on Sun Jul 16 08:04:02 2023
    High-velocity translational skiing and the beam-people-there transporter. Could phonons, plasmons (arrays of atoms and holes that move, physically translating while retaining order), or higher-velocity spintronic equivalents propagate information as
    little plasmonic groups cross edges through a medium, possibly a preserving or amplifying medium, to transport the specific layout of atoms, and just possibly some of their quantum states, to a reader like a 3D embroidery hoop or coating surrounding the
    object? The migrating plasmons might have error-reducing bitwise operations, similar to a cellular automaton, to preclude data dilution as they propagate out to the plasmon-reader coating.
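
    As a toy picture of that error-reduction idea (everything here is an illustrative assumption of mine: cell count, noise rate, number of steps), a one-dimensional majority-vote cellular automaton can repair scattered bit flips in a propagating bit pattern; something like this Python sketch:

        # Toy sketch: a 1D majority-vote cellular automaton as a bitwise
        # error-reduction step for a propagating bit pattern.
        import random

        def majority_step(bits):
            # Each cell becomes the majority of itself and its two neighbors.
            n = len(bits)
            return [1 if bits[(i - 1) % n] + bits[i] + bits[(i + 1) % n] >= 2 else 0
                    for i in range(n)]

        signal = [1] * 32                                      # the layout being transported
        noisy = [b ^ (random.random() < 0.1) for b in signal]  # ~10% bit flips en route
        repaired = noisy
        for _ in range(5):                                     # a few automaton steps
            repaired = majority_step(repaired)
        print(sum(a != b for a, b in zip(signal, repaired)), "bits still wrong")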

    I heard about something called spintronics; does spintronics have a plasmon/phonon equivalent? Spintronics could be much higher velocity than electron-hole plasmons.

    Plasmon/phonon data-reporting transporter applications, such as knowing what is inside a crystal, could possibly go with new radiation, particle, vibration (like THz), and EM detectors to make new kinds of sensors. New kinds of sensors benefit robots and
    automation.

    It is well-travelled ground at things I write, but could the New Scientist quantum camera, which makes a figurine outline from quantum-entangled photon absorption at the figurine surface occurring simultaneously at the sensing surface of the computer
    camera, imaging the figure without a direct/transmission/reflected optical path, be combined with a plasmonic/phononic crystal to create new kinds of detectors? Combining the quantum camera with plasmonics creates new detectors and depth vision. The
    phonons would migrate until they reached an edge or a crystal anomaly (from the crystal detecting something); on reaching a crystal novelty-center they would change and cause their different sibling quantum-linked phonon or plasmon, which was at the
    surface of a reader/sensor like a computer camera (as at the figurine example), to be specifically quantum polarized (possibly spin polarized) from entanglement; that images the 3D shape as well as, likely, the energy level and form of the thing the
    first plasmon reacts to. This is a way of seeing at depth of materials, possibly 3D computer chips, or even cytomaterials or tissues: when you put a nanosized plasmon-generating crystal next to a cyte, the nanocrystal makes propagating phonons/plasmons
    which wash up against and possibly penetrate the cytomembrane and cytostructure, while what they see/react to is recorded via the quantum camera effect.

    One vague idea I have about reading plasmonics/phonons at a perimeter or edge is that little or large organic molecules could have a plasmonic representation; say the edge of a quiltlike plasmon just touches the edge of a little organic molecule, like a
    carbohydrate. The quilt hops around the carbohydrate molecule, near-touching, plasmonically/phononically modifying the plasmon-quilt, without reacting chemically with the carbohydrate. So rather than an atomic force microscope tip, the plasmonic quilt
    changes its internal plasmon matrix (possibly an actively computing matrix like a cellular automaton) at each of the C-C-C of the carbohydrate. The carbohydrate goes unreacted but is read/stored plasmonically. Just for niftiness: I read about time
    crystals at Wikipedia, sort of a crystal with more than one stable ground state so it automatically rotates through states; a plasmonic/phononic time crystal could iterate the matrix-quilt while transporting measurements to a quilt-internal or external
    computer/sensor.


    Math of statistics and finding things out: A human looking at a map of US states and counties can tell which are the richest. Then, as a lay perceptor, I think humans do actual math correlations/other equations and/or just gaze at the way overlain data
    sets fit. People looking at previously uncombined data sets sometimes find new (perceived) trends which can then be tested as hypotheses.

    Database comparisons, as instantaneous statements of matrix data, are reminiscent of finding actual predictive relationships at data, relationships that could be tested, or that have a highly unique probability of occurring other than by chance (like a
    P value, but better and nonspecific). So looking at the counties you could predict college attendance by county wealth, even if you were absent a theory as to why.
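
    A minimal sketch of that county comparison, with made-up placeholder numbers rather than real county data, is just a correlation between the two overlays:

        import math

        wealth = [42, 55, 61, 38, 70, 49]    # hypothetical per-county wealth figures
        college = [21, 30, 33, 18, 39, 26]   # hypothetical attendance rates

        def pearson(xs, ys):
            # Plain Pearson correlation coefficient, written out from scratch.
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
            sy = math.sqrt(sum((y - my) ** 2 for y in ys))
            return cov / (sx * sy)

        print(round(pearson(wealth, college), 3))   # near 1.0 = strong linear trend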

    So that brings up the math of what is the minimal matrix or block size to make a 2D map overlain on an integer dataset. Like, how good a tic-tac-toe board or hexagonal tile plane do you need to get a semivisual guide to data 1) where a human glancing at
    the visual would see a trend (fMRI), 2) where an AI, like a deep learning AI, would imitate a human glancing and find a trend, and 3) where some actual math at an actual formula would find a trend (like multimodality or even an east-west gradient at a
    tic-tac-toe board) parsimoniously from less data.
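
    For version 3), the smallest interesting case might look like this sketch (the 3x3 values are invented): a plain column-mean comparison already detects an east-west gradient on a tic-tac-toe board.

        grid = [[1, 5, 9],     # invented integer data binned onto
                [2, 6, 8],     # a tic-tac-toe board; columns run
                [1, 4, 9]]     # west -> east

        col_means = [sum(row[c] for row in grid) / len(grid) for c in range(3)]
        west, mid, east = col_means
        if west < mid < east:
            print("monotone west-to-east gradient:", col_means)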

    So are there entire areas of the observable universe that fulfill the math of hypothesisless true correlation? These might be an area of science, and the technology that comes from it, that is particularly easy and effective to investigate. This brings
    up a new (undecidability-notation) form of D3 island of truthiness; a physics and other-science actual area separate from deduction or induction.

    My perception is that some of science, like physics, uses reduction (simplification) to produce predictable, modellable components, like electrons or photons or math-fields, then builds up larger things from these; the esteem goes to the theories that
    most effectively build up models that accurately predict the observed universe. That process reminds me of a combination of induction and deduction. That said, if math areas of hypothesisless correlation create islands of truthiness (D3) completely
    outside of and different from induction and/or deduction, then there could be a restatement of physics, and new physics research, based on math areas of hypothesisless truthiness. The only one (math; hypothesisless; truthiness) that I think of instantly
    is the dubious (yet possibly testable): math winnowing of anthropic principle variants at a multiverse kind of set theory implies: if you perceive you exist, then it must be at a physics that permits that.

    Now testability matters, notably at core, as there is no way to tell if an actual existing system is constructed in part with a non-hypothesisless math component. Keep doing the science experiments.

    Math entertainment:
    Math Description of a universe where correlation is always causation; then finding areas at our universe-we-live-in that have or approximate that mathematical set-up.

    Looking at a map of US counties colored on wealth, then comparing any other thing to it, might often generate testable hypotheses. That is normal during 2019 AD. Are there truth regions possible without a hypothesis? Areas of validity without doing a
    subsequent experiment based on measuring the new hypothesis? That would be really different, and kind of like finding (math/data) places of accurate knowledge (at undecidability notation, a new kind of D3 local-truth island). The thing is, could there
    be a math description of a simple matrix, like a tic-tac-toe board, where for any two matrices compared the statistics equations would always have to be true; that is, doing the math, the P value would always definitionally be p = 0.0. At such a math
    space, if you saw any data trend with your human vision it would always be certain, rather than competitive with chance, and would always be true. Interestingly, no hypothesis is then necessary to make a statement about the system, "Rich counties have
    more plastic cards" or something. Also, even if the math works, it might not be a hypothesisless certainty, because of the Gödel incompleteness theorem and a Quoran's (Quora user's) statement that arithmetic with addition and multiplication cannot prove
    its own consistency; so only for certain assumptions of mathematics would hypothesisless "true" correlations exist.
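
    For contrast with the imagined always-p=0.0 space, here is what an ordinary chance-competitive P value looks like: a permutation test on two tiny invented overlays (the data are placeholders; statistics.correlation needs Python 3.10+).

        import random
        from statistics import correlation   # Python 3.10+
        random.seed(2)

        x = [1, 2, 3, 4, 5, 6, 7, 8]
        y = [2, 1, 4, 3, 6, 5, 8, 7]          # a noisy increasing overlay
        observed = correlation(x, y)
        trials = 10000
        # How often does a random shuffle of y correlate at least this well?
        hits = sum(correlation(x, random.sample(y, len(y))) >= observed
                   for _ in range(trials))
        print("permutation p-value ~", hits / trials)  # small = trend beats chance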

    It would be great to find or make big datasets with the math-permitted hypothesisless correlations which were mathematical-space definitionally true. Those equations derived from the data could then be used predictively at what might be entirely new
    datasets with the same math-matrix-true math organization (or of course you could just process the first 10% of a big dataset, find the "hypothesisless truthiness effects", and then isolate those "truelike" areas or data of interest at the other
    90% of the data set).
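
    A sketch of that first-10%/other-90% procedure on random placeholder data (all names and numbers here are invented): screen a small slice for strong correlations, then check whether they persist on the held-out remainder.

        import random
        from statistics import correlation   # Python 3.10+
        random.seed(0)

        n = 1000
        wealth = [random.gauss(50, 10) for _ in range(n)]
        cards = [w * 0.5 + random.gauss(0, 2) for w in wealth]  # real relationship
        noise = [random.gauss(0, 1) for _ in range(n)]          # no relationship

        cut = n // 10                                  # first 10% = screening slice
        for name, col in (("cards", cards), ("noise", noise)):
            r_screen = correlation(wealth[:cut], col[:cut])
            r_holdout = correlation(wealth[cut:], col[cut:])
            print(name, round(r_screen, 2), "->", round(r_holdout, 2))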

    Could sensors be built around hypothesisless matrix-true math? These physical sensors could then make something like an image (like self-driving car video) where, if you can predict it with any of a group of equations, it is true.

    Could there be a new kind of statistical process control based on a math of hypothesisless true correlation spaces?
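
    For contrast, conventional statistical process control flags points beyond chance-based limits, roughly like this minimal Shewhart-style sketch (the readings are invented); a hypothesisless version would replace the 3-sigma chance threshold with a definitionally true criterion.

        from statistics import mean, stdev

        baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 10.1]  # in-control history
        m, s = mean(baseline), stdev(baseline)
        new = [10.0, 13.5, 9.9]                               # fresh process readings
        flags = [x for x in new if abs(x - m) > 3 * s]        # beyond mean +/- 3 sigma
        print("out-of-control points:", flags)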

    Encouraging new physics technologies and hypotheses: Could big physics, like colliders, have a math-built-in setup so that at experiments constructed a certain way, if it was measured, you could know it exists, as compared with making a dozen (gravity
    waves) to a few billion (lasers) measurements to compare/contrast to the stochastics of chance? "Well, we constructed the new physics experiment out of two tic-tac-toe boards; overlain, if we see anything, with computers or our human vision, it's
    mathematically supported as an actual effect." I think an engineer might point out that the math of your gravity wave detector might be tight, but an oversize-load truck driving above it could instantly make the plurality of measurements, at a
    preexisting math of unlikely-to-be-chance, the better guiding math.

    MWI:
    Can a person define or generate a multiverse universe with the math of predictive certainty, absent generating and verifying a hypothesis, with a defined or undefined future? That could make "create once, run well, terminate predictably and well"
    universes, with or without sentience. Can sentience be created in a math-space universe where if it can be thought, it is true (hypothesisless certainty)? This provides benefit to the inhabitants as they are always right, about everything, no matter
    what they think. Noting nonrepeating cellular automata from simple things like the Rule 110 guide, at an "if you can think it, it's true" form of universe, it could/would still be continually generating fresh, previously unknown beneficial material.
    That gives what we will call "people" or "humans" the ability to always be right at thoughts, yet be nondetermined and "unclockworklike" at their perceived universe.


    "People looking at previously uncombined data sets sometimes find new (perceived) trends which can then be tested as hypotheses."

    Ok, so how is hypothesisless truthiness different from the "=" symbol in an equation? Also, are there non-Turing cellular automata, different from Rule 110 (Wolfram) Turing-complete automata, that create D3 regions of hypothesisless truthiness? These
    non-Turing computers that produce regions of truth are ways of finding something out, yet are computer-science expanding and alluring because they are non-Turing machines.

    If you embed a non-Turing cellular automaton inside another non-Turing automaton, do you get a self-editing system that can edit itself to produce an output rather than using a loop? These could be instant-math (as compared with iterating) problem
    solvers. One possibility (like perturbing one tiny branchlet of a fractal with the preference of changing the macrostructure of the entire fractal) is to rewrite the surrounding automaton1 with the average, or mode, of the internal nested (embedded)
    automaton2. That causes the most frequent output (mode) of automaton2 to completely reshape the automaton1 that contains its subset automaton2. I somehow think that rewriting the surrounding structure with the mode of automaton2 differs from
    computer-science recursion. So I think the idea generally is to do computing without a Turing machine, and to change big things with tiny modifications at a branch.
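
    A loose sketch of that nesting idea (rule numbers and sizes are arbitrary choices of mine, and ordinary elementary cellular automata stand in for the hoped-for non-Turing kind): run an inner automaton for a few steps, take the mode of its cells, and rewrite the whole surrounding row with it, so a tiny embedded region reshapes the macrostructure.

        from statistics import mode

        def step(cells, rule):
            # One step of an elementary CA with the given Wolfram rule number.
            n = len(cells)
            return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i]
                              + cells[(i + 1) % n])) & 1
                    for i in range(n)]

        outer = [0] * 16                     # automaton1
        outer[8] = 1                         # a single seeded cell
        inner = outer[5:10]                  # the embedded automaton2 region
        for _ in range(3):
            inner = step(inner, 110)         # iterate the inner automaton
        outer = [mode(inner)] * len(outer)   # rewrite automaton1 with the mode
        print(outer)                         # the reshaped macrostructure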

    Ok, just to do some Quora-based reacting: it seems like the vast majority of things at the observable universe that get measured have multi-item, multicomponent parts. There is even the notion that the normal distribution (frequent at things) is sourced
    from basic combinatorics. So when physicists and others make an effort to describe a newly measured or theorized system like the multiverse, is presuming things like "a flat universe, or a round universe" missing out as a result of simplification
    (simplify to build constructables from)? Is there a math- and theory-legitimate way to note, previously inductively*, that middle complexity is the usual thing, so a testable middle-complexity theory could have value? Big computers, deep learning, and
    AI could possibly start with nearer-to-middle-complexity new models of new physics measurements, technologies, and new physics theories, and then isolate their heightened predictive ability from a middle-complexity theory source. "Instead of radiating
    from a point source, an origin universe radiates from multiple sources, metaphorically similar to a Gaussian distribution of Boltzmann brains" is a comic yet nifty version. As I am writing things for online publishing it is my duty to be accurate and
    earnest, yet I am amused with the idea that Boltzmann brains, each with their own anthropic principle zone, expand towards each other, then the ones that get along together persist, causing peace in the multiverse.
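
    That combinatorics-to-normal-distribution point fits in a few lines: sums of many coin flips (pure combinatorics) pile up into a bell curve. The sample sizes here are arbitrary.

        import random
        from collections import Counter
        random.seed(1)

        # 10000 samples, each the sum of 40 fair coin flips
        sums = [sum(random.randint(0, 1) for _ in range(40)) for _ in range(10000)]
        hist = Counter(sums)
        for k in sorted(hist):               # crude text histogram: a bell curve
            print(f"{k:2d} {'#' * (hist[k] // 50)}")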

    *I note "previously inductively" because (repeat measurements -> induction featuring math improvements); yet if mathematically constructable hypothesisless correlators (where, at those specific systems, the parsimoniousness of the math, possibly matrix
    math, causes correlation to always = causation at that definitional space) can create math spaces in utilized physics, theoretical physics, and technologized objects, then those new things are outside induction and deduction; that seems new to me and
    could be useful.

    Possible reply: because FEA (finite element analysis) works: the aggregates of minimal descriptions actually are predictive. Being predictive, they have value. Even if you presume middle-complexity modellability from parts (even FEA, or theories built
    around collections of definitional minimal-sized forms), still, middle-complexity forms could have value and/or be better predictors. Think of some zig-zaggy squares made with pinking shears; you can tessellate those, so metaphorically middle-complexity
    object-chunks can have predictive capacity at various fabrics, including theories.
    I can see how AI can improve physics and the technologies that arise from it.

    Is there a middle-complexity math thing (thus possibly a physics thing) that you could not find from testing or aggregating mini-components, that actually exists and influences the human-experienced universe? Say, the idea of a Gaussian, absent the
    actual collection of points. It does have predictive power, but you might not ever posit its existence based on a series of individual measurements of theory components.

    Another area of middle complexity might be (creating/explaining) emergent properties. Although minimized component definitions are something I am writing about with a tropism towards expanding beyond them, it would be nifty to generate a catalog of new
    emergent properties, possibly even one generated at an AI that makes vast quantities of permitted shapes ("while you were modelling a spherical point, the AI just said 'Pringle!' or 'Hey, non-orientable surface!'"), producing a big catalog of new
    emergent properties people could then use. The AI or software is producing a whole bunch of "middle complexity" data primitives as compared with building up from components.


    So what physical systems could be hypothesisless? At a plonkish level, I am reminded of the idea that if you just make things out of atoms, then the math-biasing of those systems, and (truthiness-at-matter) math regions where correlation always =
    causation, might be automatic.

    Everettian many worlds, as I perceive it, comes from making the math function at the Schrödinger equation. So it is at least a noninductive math-space, even though things like the Schrödinger equation only predict the hydrogen atom (20th-century AD
    version). I once thought, among other possibilities, that since MWI depends on the Schrödinger equation, and the Schrödinger equation, during the 20th century AD, predicts the hydrogen atom, perhaps MWI universes might be generated composed only of
    those things that the Schrödinger equation can actually predict; so at a minimum, entire new universes made up only of hydrogen and space, which, if I imitate 20th-century physics, then have a gravity thing where they become stellar objects. So every
    new universe would be a "diffuse bang" that then did the predicted physics things to form stuff.
    Just thought I would type that for entertainment.
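
    The hydrogen prediction mentioned above reduces, in the nonrelativistic 20th-century textbook form, to E_n = -13.6 eV / n^2:

        RYDBERG_EV = 13.605693               # hydrogen ground-state binding energy, eV
        for n in range(1, 5):
            print(f"n={n}: E = {-RYDBERG_EV / n**2:.3f} eV")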

    I do not have any idea about the truth of the Many Worlds Interpretation of physics (MWI) but I have thought of approaches and technologies to test the MWI.



    Big computers might be able to do something with middle complexity as the source of new models that are then used with first principles to generate new hypotheses, new physics, and new technology. I am reminded of what I perceive are called Platonic
    forms; comparatively rich things that have nonreducible meaning. It is possible some of the middle-complexity forms that the software finds will go well with human cognition styles, causing actual humans to make new theories, hypotheses, and
    technologies.

    Let's say a human hears from a computer, "If you do not know where you are, you are likely in the middle," so then the human thinks, "Perhaps then I am a mid-sized sentient organism; could there be tinier sentient organisms than me?"

    Then the human devises an experiment to look for something like extraterrestrial life not in outer space, but in tinier regions of existing human space. They find out that the immune system is capable of not just doing what the body says, but is capable
    of learning (brief or lengthy) programs of its own, and even generating new programs. Not necessarily a sentience you can communicate with, yet novel to me.

    Encouragingly for the middle-sizeists, the volume of the human immune system is within an order of magnitude of the volume of the human brain. If it is possible that the number of recombinations of an immune system is larger than the number of neurons
    at a human brain, then there is a "no emergence yet noted" computational basis for something as effective as a human brain. Perhaps the immune system is a P-zombie! Experiments are then thought of to ask the immune system if it is sentient, or just a
    computer that writes its own programs.
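
    A rough order-of-magnitude check behind that speculation, using commonly quoted ballpark figures that are assumptions here, not measurements of mine: roughly 8.6e10 neurons in a human brain, and at least 1e12 theoretically possible antibody recombinations.

        import math

        neurons = 8.6e10                     # ballpark human brain neuron count
        recombinations = 1e12                # low-end antibody diversity estimate
        ratio = recombinations / neurons
        print(f"repertoire is ~{math.log10(ratio):.1f} orders of magnitude "
              f"larger than the neuron count ({ratio:.0f}x)")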

    So, the middle-complexity forms could have value generating new science and technology.


    Delayed-choice quantum eraser (DQCE) notes, as prompted by reading the Wikipedia article.

    All technologies, ideas, and inventions of Treon Sebastian Verdery are public domain at July 8, 2023 AD and previously, as well as after that date.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)