• Nonsentient white and blue AI could have globally distributed cameras

    From Treon Verdery@21:1/5 to All on Wed Sep 7 09:39:52 2022
    One form puts cameras of moderate computational ability at the scene, paired with a form that processes the sense data at a location different than the cameras and samples all the pattern resonance activity; notably, the location, color, and position of all objects are among the things sensed. Per group of 40,000 people, with 20,000 things per person and motion processed every 10 milliseconds (fast enough to see a keyboard keypress), making a computer data representation from a thing like video, though there are also other senses, is about 100 million things per
    second, so the computer the white and blue nonsentient AI utilizes is much bigger than a PC. It is pleasant to note that at 100 million objects processed per second, a 4 GHz computer has about 40 instructions per object per core. One 4 GHz computer could attach to 4, 40, or
    400 graphics processing units (GPUs), a kind of parallel wide-bandwidth computer, which during 2019 could have 4,096 processing cores, each engineered to work on images. At 126 frames per second (a GPU velocity I think I read), each GPU is 126 rooms
    of data once per second. At an eleven-room, three-bedroom dwelling with 2.5 people on average per dwelling, 40,000 people is 16,000 dwellings, or about 4.4 rooms per person, so before optimization one GPU at 126 FPS is sufficient for about 29 people. That goes with assuming any
    object at any moment could change; it is imaginable that with people at a dwelling, 98% of the items, like things in the livingroom, art objects, and other things like pistachio package wrappers around the room, do not spontaneously change their location. A way
    to economize computation is, after gathering data on the area, like a room, to omit updates on the 98% of things that do not change, so scanning every 100th pixel for change with the GPU causes 100 times more efficient computing of, and noticing of, changes,
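    The every-100th-pixel scan could be sketched as below; the stride of 10 in each image dimension touches roughly 1 pixel in 100, and the noise threshold is a hypothetical parameter, not from the source.

```python
import numpy as np

def changed(prev: np.ndarray, curr: np.ndarray, stride: int = 10,
            threshold: int = 12) -> bool:
    """Compare only every stride-th pixel in each dimension.

    Stride 10 in both height and width samples ~1 pixel in 100,
    matching the "every 100th pixel" scan, so the comparison does
    about 100x less work than a full-frame diff. The threshold is an
    assumed noise floor, not a figure from the source text.
    """
    a = prev[::stride, ::stride].astype(np.int16)
    b = curr[::stride, ::stride].astype(np.int16)
    return bool(np.abs(a - b).max() > threshold)

# Two synthetic 480x640 grayscale frames; the second changes one "object".
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 4, size=(480, 640), dtype=np.uint8)
frame2 = frame1.copy()
frame2[100:140, 200:240] = 255  # a 40x40 patch brightens

print(changed(frame1, frame1))  # False: identical frames
print(changed(frame1, frame2))  # True: the patch covers sampled pixels
```

    A 40x40 patch spans several sampled rows and columns at stride 10, so changes of object scale are still noticed even though 99% of pixels are skipped.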
    With that change detection, the AI is viewing about 2,900 people per GPU (some 1,160 dwellings) as to possible pattern effects and people's behavior, which is about 14 GPUs per 40,000 people. Globally, at the 2019 population of 7.6 billion, which JY says will go up, that is about 2.6
    million GPUs. It is possible there is a motion or change detector algorithm an order of magnitude more effective at detecting change than 100th-pixel scanning: the most minimal sample of an image, like a bit hash of a video frame, rather than
    an every-100th-pixel comparison of an entire view, could be an order of magnitude higher efficiency. If so, it is 1.4 GPUs per 40,000 people, globally about 266,000 GPUs to cover the population of Earth, about 77 million US$, which could double to $
    154 million with places of commerce, other non-dwelling buildings, schools, and streets. Also, wherever people are they might change something about once a second, continuously, as speech and body language, and at typing, a keypress each 1/10 of a
    second. 24 FPS (movies) conserves fluidity and AI processing of body language and is a higher velocity than typing, so with 1/10th of human behavior being full of change, audio, and body language, 40,000 people is about 381 GPUs at 126 GPU fps / 24 fps, when 1/10 of
    people's moments are actions and data compression at the camera halves file size, if the camera processes the data and communicates with the white and blue nonsentient AI with an update once a second,
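    The sizing chain above can be reproduced arithmetically; all constants are the post's own figures, except the roughly $290-per-GPU price, which is inferred from the quoted $77 million total rather than stated in the text.

```python
# Sizing chain from the text; the ~$290 GPU price is an inference.
GPU_FPS = 126                  # frames per second per 2019-era GPU
ROOMS_PER_PERSON = 11 / 2.5    # 11-room dwelling, 2.5 people -> 4.4

people_per_gpu = GPU_FPS / ROOMS_PER_PERSON       # ~28.6, "29 people"

# Every-100th-pixel change detection: ~100x more people per GPU.
people_per_gpu_scanned = people_per_gpu * 100     # ~2,900
gpus_per_40k = 40_000 / people_per_gpu_scanned    # ~14

population = 7.6e9
gpus_global = population / 40_000 * gpus_per_40k  # ~2.6 million

# A further 10x from a bit-hash change detector.
gpus_global_hashed = gpus_global / 10             # ~266,000
cost_usd = gpus_global_hashed * 290               # ~$77 million

# Alternative framing: 1/10 of moments are actions, each needing
# 24 fps, with camera-side compression halving the load.
gpus_per_40k_action = 40_000 * (1 / 10) * 24 / GPU_FPS / 2  # ~381

print(round(people_per_gpu), round(gpus_per_40k),
      round(gpus_global_hashed), round(gpus_per_40k_action))
```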

    When the white and blue nonsentient AI functionalizes part of its form at each camera, then for video and audio there is an absence of GPUs centrally utilized, although parallel computation, and routing around any nonfunctional computing cores at the AI, is
    beneficial. There are numerous architectures of parallel computers, and the GPU is among them.
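    Routing around nonfunctional computing cores could be sketched minimally as below; the node names and the is_alive flag are illustrative assumptions, not from the source.

```python
# A minimal sketch of routing work around nonfunctional nodes: each
# frame is assigned round-robin among the currently live nodes, so one
# dead node does not stop processing. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    is_alive: bool

def route(frame_id: int, nodes: list[Node]) -> str:
    """Assign a frame to a live node, skipping dead ones."""
    live = [n for n in nodes if n.is_alive]
    if not live:
        raise RuntimeError("no functional nodes")
    return live[frame_id % len(live)].name

nodes = [Node("camera-local", True), Node("regional", False),
         Node("central", True)]
assignments = [route(i, nodes) for i in range(4)]
print(assignments)  # the dead "regional" node is never chosen
```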

    I favor all things being video and audio recorded continuously, and all people (that is, persons, that is, humans, that is, homo sapiens) being able to view the video, audio, and data as they please as public data

    The rich-computing-at-the-camera way of doing white and blue nonsentient AI that senses what Treon Verdery calls the IT pattern is also a way to build the AI's form, object, action, place, and time pattern data gathering capability: put the object
    and human activity sourced change detector at the camera, and pass along an image data bit hash when things are unchanging, as well as the 24 fps frame just preceding any camera-detected change. Then the white and blue AI, which has many backups and
    oops-tolerant distributed nodes that prefer to function together, and function optimally together, or can function alone, at a 4 GHz or higher velocity computer, noting GPUs or other parallel architectures, utilizes
    the data from the computationally capable cameras, which is possibly higher accuracy and which makes the outlay to build the white and blue nonsentient AI global IT pattern observer less
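    The camera-side bit-hash idea could be sketched as below: the camera uploads only a short hash while nothing changes, and uploads full frames, including the frame just preceding the change, when the hash differs. The choice of hashlib's blake2b and the 8-byte digest size are assumptions for illustration.

```python
# Sketch of camera-side "bit hash of a video frame" change detection:
# send an 8-byte hash while the scene is unchanging; send full frames
# (including the frame just before the change) when the hash differs.
import hashlib
import numpy as np

def frame_hash(frame: np.ndarray) -> bytes:
    # 8-byte digest: far cheaper to transmit each second than pixels.
    return hashlib.blake2b(frame.tobytes(), digest_size=8).digest()

def to_send(prev: np.ndarray, curr: np.ndarray):
    """Return what the camera uploads for this frame."""
    if frame_hash(prev) == frame_hash(curr):
        return ("hash", frame_hash(curr))   # unchanged: hash only
    return ("frames", prev, curr)           # changed: both frames

still = np.zeros((480, 640), dtype=np.uint8)
moved = still.copy()
moved[10, 10] = 1                           # a single pixel changes
print(to_send(still, still)[0])             # "hash"
print(to_send(still, moved)[0])             # "frames"
```

    Unlike pixel subsampling, a whole-frame hash notices any change at all, at the cost of not localizing it; the central AI would request or receive the frames themselves to see what moved.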




    The change detector could also be spread across a plurality of GPUs

    Able to read text on containers and see each character on a keyboard



    support feelings

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)