• The rich computing at the camera way of doing white and blue nonsentient AI

    From Treon Verdery@21:1/5 to All on Wed Sep 7 09:42:16 2022
    putting the object and human activity sourced change detector at the camera and passing along an image data bit hash when things are unchanging, as well as the 24 fps frame just preceding any camera detected change. Then the white and blue AI, which has
    many backups and oops-tolerant distributed nodes that prefer to function together, function optimally together, or can function alone, on a 4 GHz or higher velocity computer, noting GPUs or other parallel architectures, utilizes the
    computationally rich data from the computationally capable cameras, which is possibly higher accuracy; the main white and blue AI can also have the camera send it unfiltered video and audio. That makes the outlay to build the white and blue nonsentient AI
    global IT pattern observer fiscally less intensive; making the cameras computationally rich costs less than $1 each.
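    The at-camera change detector described above can be sketched in a few lines: send only a compact digest while the scene is static, and on a change ship the frame just preceding it along with the new frame. This is a minimal illustrative sketch, not a real camera API; all names are assumptions.

```python
import hashlib

def frame_hash(frame_bytes: bytes) -> bytes:
    # Compact digest of one 24 fps frame; only this is sent while the scene is static.
    return hashlib.sha256(frame_bytes).digest()

class CameraChangeDetector:
    """Illustrative sketch of the at-camera change detector.

    While a frame matches the previous one (same hash) only the short
    digest is emitted; on a change, the frame just preceding the change
    is sent along with the new frame, as the text suggests.
    """
    def __init__(self):
        self.prev_frame = None
        self.prev_hash = None

    def process(self, frame_bytes: bytes):
        h = frame_hash(frame_bytes)
        if h == self.prev_hash:
            msg = ("unchanged", h)            # cheap keep-alive: hash only
        elif self.prev_frame is None:
            msg = ("first", frame_bytes)      # nothing to compare against yet
        else:
            # change detected: ship the frame before the change plus the new one
            msg = ("changed", self.prev_frame, frame_bytes)
        self.prev_frame, self.prev_hash = frame_bytes, h
        return msg
```

    A hash comparison like this is what lets the central AI skip whole views that have not changed, which is the source of the cost reduction claimed above.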

    The cameras: I think I have read about camera chips powered from the light that meets them, although the computational energy supplements that; photovoltaic electricity is also possible. Higher velocity is better, but a 100 MHz microcontroller can do the
    math to update the data 100 times each second at 10 milliseconds per sample. At YouTube I perceive two megabytes per minute occurs; note that at a resolution high enough to read text on packaging and observe a 3 mm body language facial expression change
    from 9 meters away, 1024 times more data would be utilized. 1/101 of a one terabyte flash drive can store 9.9 gigabytes of video data, about 5 minutes of video at a resolution 1024 times higher than YouTube (2019 AD) resolution; if flash drives become
    four times more efficient then the video memory is less than a dollar. The computationally rich camera would then, with a white and blue AI updatable schematization program, schematize the data, perhaps like a light reflectance or emittance surface
    conserving continuous line vector drawing, just package the areas that changed, then communicate the data to the blue and white AI.
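    A quick check of the storage figures, as a sketch using only the rates cited above (2 MB/minute at YouTube quality, a 1024x resolution factor, 1/101 of a one terabyte drive):

```python
# Worked version of the storage arithmetic above; all constants are the
# figures cited in the text, not measured values.
YT_MB_PER_MIN = 2.0                  # observed YouTube rate (2019)
RES_FACTOR = 1024                    # enough resolution to read package text
rich_mb_per_min = YT_MB_PER_MIN * RES_FACTOR      # 2048 MB/min, about 2 GB/min

slice_gb = 1_000 / 101               # 1/101 of a 1 TB flash drive, about 9.9 GB
minutes = slice_gb * 1000 / rich_mb_per_min       # roughly 5 minutes of video
```

    So the 1/101 drive slice holds on the order of minutes, not hours, of the high-resolution stream, which is why schematizing and sending only changed areas matters.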

    the blue and white AI could do many things with the data, like model the near field effects of objects on a person that changed location, model the view of objects around them that the person sees, and compare the layout and content of objects around the person
    to those of another person at the 99.9th percentile of being pattern white, notably benevolent, kind, empathetic, gentle, a kind of lifting happiness, actualizing rescue of others, noting when another thing is white. It could also compare the objects
    around the person, and their changes, to persons on a behavioral pattern effect and pattern effect object greater whiteness trajectory, and figure out ways, things, and attractive voluntary activities that cause the person and environment being
    observed to be more with white. Or, noting the AI may view some environments, objects, and behaviors that suggest pattern porting, pattern awareness, as well as pattern recruitment are projectable as occurring that month, week, day, or even hour, it could
    suggest the person change their behaviors, plans, and area objects, and advise particular behaviors, plans, and environmental objects until the risk has passed. Also the AI, possibly using deep learning or other neural network algorithms, links things going on
    around the person at a distance of meters or kilometers to make projections with. Before being tied to a chair, told a sometimes accurate version of the future, and touching, kind of like having placed in my hands things they would perhaps use later, there
    were some conversations; the AI might find shared elements between completely different people in different states and possibly countries that it could notice and project imminent pattern awareness with. There were also object based actions that
    predated pattern awareness: JY said people (I think different people) collected five different bodily fluids from me; the AI could have noticed the collection and transportation of the fluids to an earth location. Also, at a separate area, separate dwelling
    occurrence, Josh Yockey, the person, differing from JY the paranormal entity speaking and acting through him (he said it was like a "loa"), asked me some questions much like "do you think the pyramids should have been built"; I think out of sympathy for
    the people building them I said: No. He also told me he was working for a part of "Disney" and writing "sports card" software, and asked did I think it was approvable to make duplicates and vend them; now I think it is unethical, but unfortunately I said it was
    approvable. JY said some people get recruited at summer camp; although likely different, those kinds of pattern awareness events are likely to have shared features that deep learning or other neural networks can find as well.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Treon Verdery@21:1/5 to All on Mon Oct 3 20:20:13 2022






    I favor the creation of a white and blue nonsentient Artificial Intelligence

    the blue and white AI could do many things with the data, as described above: model the near field effects of objects that changed location, compare the layout and content of objects around a person to those of persons at the 99.9th percentile of being
    pattern white, and project pattern porting, pattern awareness, and pattern recruitment as occurring that month, week, day, or even hour. Regarding the questioning occurrences described above, like the pyramids and "sports card" duplication questions, now I
    think duplicating and vending was unethical, and unfortunately I previously said it was approvable. Nonsentient white and blue AI could find shared prepattern awareness questioning occurrences with features shared between different people and provide
    perhaps a month's time to change behaviors, change object surroundings, and even city of residence, as well as possibly run away from home. The AI could prompt ethics education software that teaches people ethics and to be better, and if they experienced
    pre pattern port questioning they could honestly answer with their software learned beliefs about what is right; or, perhaps better, the white and blue AI, having advised them in advance that questioning was likely to occur, would prompt people to omit
    answering the questions. JY said some people get recruited at summer camp; although likely notably different, those kinds of pattern awareness events are likely to have shared features amongst persons that deep learning or other neural networks can find
    as well. The white and blue AI could communicate with people through an electronic method including paper letter, note on an Amazon gift, phone texts, email, and social networking comments; at the AI's estimation of the parents as being opposed to pattern
    recruitment it could then contact them electronically: email, voice telephone, companion robots, CPU speakers like the Amazon Echo Dot and a Cortana child advisor, and person directed online browser advertising. Generally the AI would use whatever
    electronic communication form was most functional at that time. It is possible that the AI could mention that (it is possible) some people become sensitive to certain colors and are hesitant to speak, and that getting a white keyboard and a white Amazon
    Echo Dot or Cortana child advisor CPU speaker could make their life better and provide a way to continue communicating with the white and blue AI.



    The white and blue nonsentient AI could benefit from being located at a high elevation in an area with pleasant weather and a backup electrical generator; being nonsentient, it could be less influenced by the paranormal. I saw some unusual computer and
    machine effects, and it could be that spreading the white and blue AI across several states would be beneficial.



    Nonsentient white and blue AI could have globally distributed cameras with moderate or richly computational form. A form with cameras of moderate computation ability goes with a form that processes sense data at a location different than the cameras,
    sampling all the pattern resonance activity; notably the location, color, and position of all objects is among the things sensed. At each 40k people, 20k things per person, motion processed every 10 milliseconds (enough to see a keyboard keypress),
    making a computer data representation from a thing like video (although there are also other senses) is 100 million things per second; the computer the white and blue nonsentient AI utilizes is much bigger than a PC. It is pleasant to note that at 100
    million objects processed per second a 4 GHz computer can spend 40 instructions per object. One 4 GHz computer could attach to 4 to 40 to 400 graphics processing units, GPUs, a kind of parallel wide bandwidth computer, which during 2019 could have 4096
    processing cores, each engineered to work on images; at 126 frames per second (a GPU velocity I think I read) each GPU is 126 rooms of data once per second. At an eleven room three bedroom dwelling, and 2.5 people on average per dwelling, 40k people is
    16k dwellings, or about 4.4 rooms per person, so pre-optimization one GPU at 126 FPS is sufficient for 29 people; that goes with assuming any object at any moment could change. It is imaginable that with people at a dwelling, 98% of the items, like things
    in the living room, art objects, and other things like pistachio package wrappers around the room, do not spontaneously change their location. A way to economize computation is, after gathering data on the area, like a room, omitting updates on the 98%
    of things that do not change; scanning every 100th pixel for change with the GPU causes 100 times more efficient computing of, and noticing of, changes, so that is about 2900 people the AI is viewing per GPU as to possible pattern effects and behavior.
    At about 2900 people per GPU, that is 14 GPUs per 40k people; globally, at the 2019 population of 7.6 billion, which JY says will go up, that is 2.6 million GPUs. It is possible there is a motion or change detector algorithm that is an order of magnitude
    more effective at detecting change than every-100th-pixel scanning; the most minimal sample of an image, like a bit hash of a video frame, rather than an every-100th-pixel comparison of an entire view, could be an order of magnitude higher efficiency. If
    so, then it is 1.4 GPUs per 40,000 people; globally that is 266,000 GPUs to cover the population of earth, about 77 million US$, which could double to $154 million with places of commerce and other non-dwelling buildings, schools, and streets. Also,
    wherever people are they might change something about once a second, continuously, as speech and body language, and at typing a keypress each 1/10 of a second; 24 FPS (movies) conserves fluidity and AI processing of body language and is higher velocity
    than typing. So at 1/10th of human behavior being full of change, audio, and body language, 40k people is 381 GPUs at 126 GPU fps / 24 fps, when data compression at the camera halves file size, 1/10 of people's moments are actions, and the camera
    processes the data and communicates with the white and blue nonsentient AI with an update once a second.
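    The GPU sizing chain above can be checked with a short sketch; every constant here is the figure cited in the text (eleven rooms, 2.5 people per dwelling, 126 room-frames per second per GPU, every-100th-pixel scanning), not a measured value:

```python
# Redoing the per-GPU sizing arithmetic from the paragraph above.
rooms_per_person = 11 / 2.5                      # eleven rooms, 2.5 people/dwelling = 4.4
people_per_gpu_naive = 126 / rooms_per_person    # 126 room-frames/s per GPU, about 29 people
sparse_factor = 100                              # scan every 100th pixel; ~98% of objects static
people_per_gpu = people_per_gpu_naive * sparse_factor    # about 2,900 people per GPU
gpus_per_40k = 40_000 / people_per_gpu                   # about 14 GPUs per 40k people
world_gpus = 7.6e9 / people_per_gpu                      # about 2.6 million GPUs globally
hashed_world_gpus = world_gpus / 10              # order-of-magnitude better detector: ~266,000
```

    The numbers reproduce the text's 29 people per GPU pre-optimization, 14 GPUs per 40k people, 2.6 million GPUs globally, and 266,000 GPUs with a hash-based change detector.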



    When the white and blue nonsentient AI functionalizes part of its form at each camera, then for video and audio there is an absence of GPUs centrally utilized, although parallel computation and routing around any nonfunctional computing cores at the AI is
    beneficial. There are numerous architectures of parallel computers, and the GPU is among them.



    I favor all things being video and audio recorded continuously, and all people, that is persons, that is humans, that is homo sapiens, being able to view the video, audio, and data as they please as public data.



    The rich computing at the camera way of doing white and blue nonsentient AI that senses what Treon Verdery calls the IT pattern is also a way to build the AI's form, object, action, place, and time pattern data gathering capability: putting the object
    and human activity sourced change detector at the camera and passing along an image data bit hash when things are unchanging, as well as the 24 fps frame just preceding any camera detected change. The white and blue AI then utilizes data from the
    computationally capable cameras that is possibly higher accuracy, and that makes the outlay to build the white and blue nonsentient AI global IT pattern observer less.












    Able to read text on containers and see each character on a keyboard







    Support feelings; some feelings are better to support than other feelings.

    If you had a million core CPU, then a 1 THz clock speed could make a million versions of the program; if the 1 THz clock was 90% (or 7%) data reliable then each 10,000 cores would be 99.9% likely to be running an as-written version of the program, and at
    all million cores it would be 99.999% likely to be running the as-written version of the program. Some programs, I perceive, are flexible, like neural networks and deep learning AI, where 99.9% might sometimes be adequate, especially when more learning
    data could compensate for the 0.1% variation at neural weights, so a 1 THz computer clock speed at a neural network could be possible; that is 250 times faster than a 2019 personal computer or server, which makes neural network computing orders of
    magnitude more affordable than other kinds of computing.
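    The redundancy idea above can be made concrete with a standard probability sketch: if each core independently holds an intact, as-written copy of the program with probability p, the chance that at least one of n cores is intact is 1 - (1 - p)^n. The exact percentages in the text are approximate; this only shows how quickly redundancy converges even for very unreliable copies.

```python
# Probability that at least one of n independent copies is intact,
# given per-copy reliability p (a sketch of the redundancy arithmetic above).
def at_least_one_intact(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# Even a 7%-reliable copy process gives better than 99.9% confidence
# that some core holds an as-written copy once there are 100 cores.
```

    At a million cores the failure probability is astronomically small, which is the sense in which the text's "99.999% likely" holds with room to spare.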



    Each core could make a minimized byte hash of the program it has and runs; then, when first sending program output, the hash is sent out once along with the program output and compared to the hash of the actual program as written. When they are the same,
    that is a core verified as running the program as written; that core can then, while the ones doing it right are doing it right, iteratively move the rest of the cores to running the program as written on all 100 or one million cores. Then, if the core
    running the program has 64 registers of 128 bytes each, the accuracy of the contents and computing actions at those registers is multiplied at the "9% of copies of a million byte program loaded has integrity" level, so each 111 one-THz clock cycles (9%
    majority), or each 333 one-THz clock cycles, the 3% majority is utilized, comparing register contents between cores, to say "the majority say that's what the register contents or data actually are". So that is one between-cores register comparison
    (optionally register hash comparison) every 333 cycles; it could be that the other cores' register variant hashes identicalize at far less than 1%, so a 1% majority, with about 1k clock cycles between hash comparisons, is possible. What about power
    consumption, IC area, affordability, and which applications benefit from being run at 1 THz per core? Driverless cars work at 4 GHz but might work better at 1 THz; some medical imaging, like brain and body scans such as positron emission tomography
    noting neuron type and tissue structure at less than 1 mm area (I may have read decimal millimeters), could process at 99.9% accuracy and do another scan if more accuracy was preferred. This omits the data bandwidth of sending the image to the cloud to
    be processed at a couple hundred computers; anyplace where bandwidth to the cloud is lengthy benefits, like huge multipetabyte databases where at some versions of this a 99.9% accurate output is sufficient (processing all of Facebook, a social networking
    site among others, to bring voluntary content or products, to find children that could easily be made happy, parents that could improve their parenting style, or, at the 1 per 10 million error rate, children that would benefit from being rescued),
    enterprise resource planning (ERP) data repositories, and some large physics experiments.
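    The per-core hash verification step above can be sketched directly: each core hashes the program image it actually loaded, and a copy is trusted only when its digest matches the digest of the program as written. Function names here are illustrative, not any real multicore API.

```python
import hashlib

def digest(program: bytes) -> bytes:
    # Minimized byte hash of a program image, as the text proposes.
    return hashlib.sha256(program).digest()

def verified_cores(reference: bytes, loaded_copies: list[bytes]) -> list[int]:
    """Indices of cores whose loaded copy matches the reference program image.

    A verified core can then re-seed its neighbours until all cores agree,
    which is the iterative step the text describes.
    """
    ref = digest(reference)
    return [i for i, copy in enumerate(loaded_copies) if digest(copy) == ref]
```

    Comparing short digests rather than whole million-byte images is what keeps the verification traffic small enough to repeat every few hundred clock cycles.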

    GPUs exist now; comparing a hundred or a million cores at 1 THz to GPUs, other than the 1 THz processing velocity, the two-out-of-three approach at highly overclocked GPUs has very similar benefits.






    Positron sensors: do isotopically pure semiconductors, or even CCDs, respond more accurately, giving higher resolution?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)