
    From Amine Moulay Ramdane@21:1/5 to All on Tue Jul 11 13:18:56 2023
    Hello,


    More of my philosophy about the Precise Sleep() and about the essence of measuring time in the computer and about the final source code version of my StopWatch and about RDTSCP and RDTSC and about the CPU frequency scaling and about the memory barriers and about good technicality and the deeper understanding of the StopWatch and more about x86 and ARM processors and about solar cells and about AES 256 encryption and about TSMC and China and about the Transformers and about Toyota and about objective truth and about the paper about the multiple universes and about the quantum world and about consciousness and about mathematics and about the universe and about mathematical probability and about positive behavior and the positive mindset and about patience and positive energy and about the "packaging" or "presentation" and about the ideal and about being idealistic and more of my thoughts..

    I am a white arab from Morocco, and i think i am smart since i have also invented many scalable algorithms and other algorithms..


    So i think i am highly smart since i have passed two certified IQ tests and i have scored "above" 115 IQ, and i have just added to my new StopWatch a PreciseSleep() function that is more accurate than the Windows and Linux Sleep() functions, so now i think it is the final source code version of my StopWatch, and i have tested it with both older and newer CPUs and with both Windows and Linux, and i think it is working correctly, so i will now start to document it so that you know about it and know how to use it, and you can download the final source code version of my new updated StopWatch from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal
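    The PreciseSleep() function itself is written in Delphi/FreePascal; as a rough sketch of the usual technique behind such a function (a coarse OS sleep to cover most of the interval, then a short busy-wait on a high-resolution clock to absorb scheduler jitter), here is a hedged Python version. The function name and the 1 ms spin threshold are my assumptions, not taken from the actual source code:

```python
import time

def precise_sleep(seconds: float, spin_threshold: float = 0.001) -> None:
    """Sleep for `seconds` more accurately than a bare time.sleep().

    Strategy: let the OS sleep cover most of the interval, then
    busy-wait (spin) on a high-resolution monotonic clock for the
    final stretch. The 1 ms spin threshold is an assumption; it
    would need tuning per platform.
    """
    deadline = time.perf_counter() + seconds
    # Coarse phase: OS sleep until we are within the spin threshold.
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= spin_threshold:
            break
        time.sleep(remaining - spin_threshold)
    # Fine phase: spin on the monotonic clock for the last ~1 ms.
    while time.perf_counter() < deadline:
        pass

if __name__ == "__main__":
    t0 = time.perf_counter()
    precise_sleep(0.05)
    print(f"slept {(time.perf_counter() - t0) * 1000:.3f} ms")
```

    The spin phase never returns before the deadline, so the sleep can only overshoot, never undershoot; the trade-off is that the spin burns CPU for up to `spin_threshold` seconds.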

    And so that you have a deep understanding of the StopWatch, i invite you to read my previous thoughts below:

    So i will talk more about the essence of measuring time in the computer, since from my understanding of how to implement my new StopWatch, i am discovering with my fluid intelligence patterns that explain more the essence of measuring time in the computer, so here they are: when you are measuring time, you are in effect also measuring against the CPU frequency, but in the new CPUs the frequency can change dynamically, so you have two ways of dealing with it: you can disable CPU frequency scaling in the BIOS, do your exact time measurement, and then enable it again, or you can get a decent approximation without disabling CPU frequency scaling while you benchmark your code, as i am explaining below. And of course the new CPUs today are multicore, so you have to set the CPU affinity, as i will explain to you, so that the timing with the StopWatch stays on one core. Other than that, you can get good microsecond accuracy and decent nanosecond accuracy with the RDTSC assembler instruction, and CPU tick accuracy with the RDTSCP assembler instruction, and so that you know more about them, and so that you understand much more deeply how to implement a good StopWatch, i invite you to read my thoughts below:

    So i have just updated my StopWatch to support both the RDTSCP and RDTSC assembler instructions: when the CPU is older and doesn't support RDTSCP, it will use RDTSC, and when it is a newer CPU that supports RDTSCP, it will use it. RDTSC is not a serializing instruction, so i have correctly added the necessary memory barriers, while RDTSCP is a serializing instruction.
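    Python has no direct access to RDTSC or RDTSCP, but as a hedged sketch of what a StopWatch interface like the one described above might look like, here is a minimal Python analogue built on the OS's monotonic high-resolution clock (which the kernel itself typically derives from an invariant hardware counter). The class and property names below are my own invention, not the actual Delphi/FreePascal API:

```python
import time

class StopWatch:
    """Minimal stopwatch sketch: monotonic, wrap-free timing.

    time.perf_counter_ns() is monotonic and 64-bit, so it avoids
    both the GetTickCount()-style 32-bit wraparound and per-core
    counter skew; the kernel handles the serialization details
    that RDTSC/RDTSCP code must do by hand.
    """
    def __init__(self) -> None:
        self._start_ns = 0
        self._elapsed_ns = 0

    def start(self) -> None:
        self._start_ns = time.perf_counter_ns()

    def stop(self) -> None:
        self._elapsed_ns = time.perf_counter_ns() - self._start_ns

    @property
    def nanoseconds(self) -> int:
        return self._elapsed_ns

    @property
    def microseconds(self) -> float:
        return self._elapsed_ns / 1_000.0

    @property
    def milliseconds(self) -> float:
        return self._elapsed_ns / 1_000_000.0

if __name__ == "__main__":
    sw = StopWatch()
    sw.start()
    time.sleep(0.01)
    sw.stop()
    print(f"{sw.milliseconds:.3f} ms")
```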

    So i will now document my StopWatch correctly so that you also know how to use the CPU affinity correctly, and you can download the final version of my source code from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal

    And i invite you to read all my previous thoughts below so as to deeply understand the StopWatch:

    So now i have to explain something important: for a deep understanding of the StopWatch, you have to know that the assembler instruction RDTSC is supported by the great majority of x86 and x64 CPUs, but it is not a serializing instruction, i mean that it can be subject to out-of-order execution that may affect its accuracy, so that is why i have correctly added some memory barriers, and now i think it is working correctly. You also have to understand that there is another assembler instruction, RDTSCP, that is a serializing instruction and is not subject to out-of-order execution, but it is compatible only with newer x86 and x64 CPUs, so i will support it in the very near future. But now i think you can be confident in my new updated StopWatch, and i think it is an interesting StopWatch that shows how to implement a good StopWatch from the low-level layers. So i think you have to be smart so as to implement it correctly with RDTSC, as i have just done, and you can download the source code of my new StopWatch that i have just updated from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal

    And i invite you to read my previous thoughts below so as to have a deep understanding of the StopWatch:

    So i think that my new StopWatch can give a decent approximation even if you don't disable CPU frequency scaling in the BIOS, and here is why:

    When benchmarking a CPU under a heavy workload, it is generally expected that frequency scaling changes will be relatively small or negligible. This is because the frequency scaling mechanism typically aims to maximize performance during such scenarios.

    Under heavy load, the CPU frequency scaling algorithm often increases the CPU frequency to provide higher processing power and performance. The goal is to fully utilize the CPU's capabilities for the benchmarking workload.

    In these cases, frequency scaling changes are generally designed to be minimal to avoid introducing significant variations in performance. The CPU frequency may remain relatively stable or vary within a relatively small range during the benchmarking
    process.

    Considering these factors, when benchmarking under heavy workload conditions, the impact of frequency scaling changes on timing measurements using RDTSC is typically limited. As a result, RDTSC can provide a reasonable approximation of timing for
    benchmarking purposes.
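    A common way to exploit this in practice, without touching the BIOS, is to repeat the measured code several times and keep the fastest run, since the minimum is the run least disturbed by frequency ramp-up, cache warm-up, and scheduler noise. A minimal sketch (the repeat count of 20 is an arbitrary assumption, not from the original source):

```python
import time

def best_of(func, repeats: int = 20) -> float:
    """Return the minimum wall-clock time of `repeats` runs of func(), in seconds.

    Taking the minimum rather than the mean filters out runs that
    were slowed by frequency scaling or by other processes.
    """
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        func()
        best = min(best, time.perf_counter() - t0)
    return best

if __name__ == "__main__":
    t = best_of(lambda: sum(range(100_000)))
    print(f"best run: {t * 1e6:.1f} us")
```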

    So then i invite you to read my following previous thoughts so that you understand my views on the StopWatch:


    I have just updated my new StopWatch, and it now also includes the correct memory barriers for older 32-bit Delphi versions like Delphi 7, and you can download it from the web link just below, and i invite you to read my previous thoughts below so as to understand my views about the StopWatch:

    So i have just updated my new StopWatch, so the first problem is:

    - Instruction reordering: The rdtsc instruction itself is not a serializing instruction, which means that it does not necessarily prevent instruction reordering. In certain cases, the CPU may reorder instructions, leading to inaccuracies in timing
    measurements.

    So i have just used memory barriers to solve the above problem.

    And here is the second problem:

    - CPU frequency scaling: Modern CPUs often have dynamic frequency scaling, where the CPU frequency can change based on factors such as power management and workload. This can result in variations in the time measurement based on the CPU's operating
    frequency.

    So you have to disable CPU frequency scaling in the BIOS to solve the above problem, and after that make your timing with my StopWatch.

    And for the following third problem:

    - Multicore/Threaded environments: If your system has multiple cores or threads, using rdtsc may not provide synchronized timing across different cores or threads. This can lead to inconsistent and unreliable timing measurements.

    You can set the CPU affinity to solve the third problem.
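    As one illustration of pinning the measurement to a single core, here is a hedged sketch using the Linux-only os.sched_setaffinity (the original StopWatch is Delphi/FreePascal and would use the corresponding Windows or Linux API directly; the helper below is my own and is guarded because the call does not exist on Windows or macOS):

```python
import os

def pin_to_cpu(cpu: int) -> bool:
    """Pin the current process to a single CPU, if the platform allows it.

    Returns True only if the affinity was actually applied.
    os.sched_setaffinity is Linux-only, and setting it can also fail
    inside containers that restrict the allowed CPU set.
    """
    if not hasattr(os, "sched_setaffinity"):
        return False
    try:
        os.sched_setaffinity(0, {cpu})  # 0 means the calling process
    except OSError:
        return False
    return os.sched_getaffinity(0) == {cpu}

if __name__ == "__main__":
    print("pinned to CPU 0:", pin_to_cpu(0))
```

    With the process pinned, all timestamp reads come from the same core's counter, which removes the cross-core inconsistency described in the third problem above.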

    So i will document my StopWatch more so as to teach you how to use it,
    so stay tuned !

    And now i have just updated my new StopWatch with the necessary memory barriers, and now you can be confident with my new updated StopWatch.

    So now my new updated StopWatch uses memory barriers correctly, avoids the overflow problem of the Time Stamp Counter (TSC), supports microsecond, nanosecond and CPU-clock timing, and is object oriented, and i have just made it support both x86 32-bit and x64 64-bit CPUs, both the Delphi and FreePascal compilers, and both Windows and Linux. What is good about my new StopWatch is that it shows how to implement it from the low-level layers in assembler etc., so i invite you to look at the new updated version of my source code that you can download from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal


    Other than that, read my previous thoughts below so as to understand my views:

    So now we have to attain a "deep" understanding of the StopWatch, and as you are noticing, i am, with my fluid intelligence, understanding the StopWatch deeply, so i have just discovered that the following StopWatch: https://www.davdata.nl/math/timer.html , from the following engineer from Amsterdam: https://www.davdata.nl/math/about.html , is not working correctly, since he is calling the function GetTickCount() in the constructor, and there is a problem and a bug: when the tick count value in milliseconds returned by GetTickCount() reaches its maximum value, that is high(dword), it wraps around to zero and starts counting up again, because the tick count is stored in a fixed-size data type that has a maximum value. So his way of timing in milliseconds in the constructor is not working, since it is not safe, even though this StopWatch of this engineer from Amsterdam does effectively avoid the overflow problem of the Time Stamp Counter (TSC), since he is using an int64 on the 32-bit x86 architecture in the Intel assembler function getCPUticks() that i understand, and this int64 can, from my calculations, go up to 29318.9829 years. So i think his StopWatch is not working for the reason i am giving just above. And the second problem is that the accuracy of the timing obtained from the code he provided, using the rdtsc instruction in assembler, is dependent on various factors, including the hardware and software environment. However, it's important to note that directly using rdtsc for timing purposes may not provide the desired accuracy for several reasons:

    - CPU frequency scaling: Modern CPUs often have dynamic frequency scaling, where the CPU frequency can change based on factors such as power management and workload. This can result in variations in the time measurement based on the CPU's operating
    frequency.

    - Instruction reordering: The rdtsc instruction itself is not a serializing instruction, which means that it does not necessarily prevent instruction reordering. In certain cases, the CPU may reorder instructions, leading to inaccuracies in timing
    measurements.

    - Multicore/Threaded environments: If your system has multiple cores or threads, using rdtsc may not provide synchronized timing across different cores or threads. This can lead to inconsistent and unreliable timing measurements.
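    The GetTickCount() wraparound discussed above, and the headroom of an int64 TSC-style counter, can both be checked with plain arithmetic. A small sketch (the 3 GHz figure is my own example rate, not a number from the original text):

```python
def elapsed_ticks32(start: int, now: int) -> int:
    """Elapsed milliseconds across a GetTickCount()-style 32-bit wraparound.

    Unsigned subtraction modulo 2**32 gives the correct answer even
    when `now` has wrapped past zero, as long as the true elapsed
    time is under ~49.7 days (2**32 ms).
    """
    return (now - start) & 0xFFFFFFFF

if __name__ == "__main__":
    # Counter wrapped: started near the top, now just past zero.
    print(elapsed_ticks32(0xFFFFFFF0, 0x00000010))  # 32 (milliseconds)
    # An int64 cycle counter, by contrast, has enormous headroom:
    years = 2**63 / (3_000_000_000 * 365.25 * 86400)
    print(f"2^63 ticks at 3 GHz ~ {years:.1f} years")
```

    Note that the safe fix for a wrapping counter is the modular subtraction, not reading the raw value in a constructor and comparing it directly, which is exactly the bug described above.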

    So i have thought more about it and i think i will not support ARM in my new StopWatch, since ARM processors don't have a Time Stamp Counter (TSC) like the one in x86 processors that is compatible across previous 32-bit and 64-bit CPUs, so ARM has many important weaknesses, and the first important weakness is the following:

    There is no single generic method that can be universally applied to all Arm processors for measuring time in CPU clocks. The available timing mechanisms and registers can vary significantly across different Arm processor architectures, models, and
    specific implementations.

    In general, Arm processors provide various timer peripherals or system registers that can be used for timing purposes. However, the specific names, addresses, and functionalities of these timers can differ between different processors.

    To accurately measure time in CPU clocks on a specific Arm processor, you would need to consult the processor's documentation or technical reference manual. These resources provide detailed information about the available timers, their registers, and how
    to access and utilize them for timing purposes.

    It's worth noting that some Arm processors may provide performance monitoring counters (PMCs) that can be used for fine-grained timing measurements. However, the availability and usage of PMCs can also vary depending on the specific processor model.

    Therefore, to achieve accurate and reliable timing measurements in CPU clocks on a particular Arm processor, it's crucial to refer to the documentation and resources provided by the processor manufacturer for the specific processor model you are
    targeting.
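    In portable code, the usual way around this per-processor variability is to let the kernel pick the counter: clock_gettime(CLOCK_MONOTONIC) is backed by the ARMv8 Generic Timer's virtual counter (CNTVCT_EL0) on recent ARM Linux, and usually by the TSC on x86, so the same source works on both. A minimal sketch via Python's time.monotonic_ns() (which wraps that call):

```python
import time

def monotonic_ns_pair() -> tuple:
    """Read the OS monotonic clock twice, in nanoseconds.

    The kernel derives this clock from whatever hardware counter
    the platform provides (Generic Timer on ARMv8, typically the
    TSC on x86), which sidesteps per-model counter differences.
    """
    return time.monotonic_ns(), time.monotonic_ns()

if __name__ == "__main__":
    a, b = monotonic_ns_pair()
    print("monotonic, nondecreasing:", b >= a)
```

    The trade-off versus raw counter reads is a few tens of nanoseconds of call overhead, which is usually acceptable for a stopwatch.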

    And the other weaknesses of ARM processors are the following:

    I have just looked at the following articles about Rosetta 2 and the benchmarks of Apple Silicon M1 Emulating x86:

    https://www.computerworld.com/article/3597949/everything-you-need-to-know-about-rosetta-2-on-apple-silicon-macs.html

    and read also here:

    https://www.macrumors.com/2020/11/15/m1-chip-emulating-x86-benchmark/

    But i think that the problem with the Apple Silicon M1 and the next Apple Silicon M2 is that Rosetta 2 only lets you run x86-64 macOS apps, that is, apps that were built for macOS (not Windows) and aren't 32-bit. The macOS restriction eliminates huge numbers of Windows apps, and the 64-bit restriction eliminates even more.

    Also read the following:

    Apple says new M2 chip won’t beat Intel’s finest

    Read more here:

    https://www.pcworld.com/article/782139/apple-m2-chip-wont-beat-intels-finest.html


    And here is what i am saying on my following thoughts about technology about Arm Vs. X86:

    More of my philosophy about the Apple Silicon and about Arm Vs. X86 and more of my thoughts..

    I invite you to read carefully the following interesting article so as to understand more:

    Overhyped Apple Silicon: Arm Vs. X86 Is Irrelevant

    https://seekingalpha.com/article/4447703-overhyped-apple-silicon-arm-vs-x86-is-irrelevant


    More of my philosophy about code compression of RISC-V and ARM and more of my thoughts..

    I think i am highly smart, and i have just read the following paper, which says that RISC-V Compressed programs are 25% smaller than RISC-V programs, fetch 25% fewer instruction bits than RISC-V programs, and incur fewer instruction cache misses. Its code size is competitive with other compressed RISCs. RVC is expected to improve the performance and energy per operation of RISC-V.

    Read more here to notice it:

    https://people.eecs.berkeley.edu/~krste/papers/waterman-ms.pdf


    So i think RVC has the same compression as ARM Thumb-2, so i think that i was correct in my previous thoughts, read them below, and i think we now have to look at whether x86 or x64 is still more cache friendly even with Thumb-2 compression or RVC.

    More of my philosophy of who will be the winner, x86 or x64 or ARM and more of my thoughts..

    I think i am highly smart, and i think that since x86 and x64 have complex instructions and ARM has simple instructions, x86 and x64 are more cache friendly, but ARM has tried to solve the problem by compressing the code using Thumb-2, which i think compresses the size of the code by around 25%, so i think we have to look at whether x86 or x64 is still more cache friendly even with Thumb-2 compression, and i think that x86 and x64 will still optimize power and energy efficiency further. So, since x86 and x64 have other big advantages, like the advantage that i am talking about below, i think x86 and x64 will still be successful big players in the future, so i think it will be the "tendency". So i think that x86 and x64 will be good for a long time to make money in business, and they will be good business for the USA, which makes the AMD and Intel CPUs.


    More of my philosophy about x86 or x64 and ARM architectures and more of my thoughts..

    I think i am highly smart, and i think that the x86 and x64 architectures have another big advantage over the ARM architecture, and it is the following:


    "The Bright Parts of x86

    Backward Compatibility

    Compatibility is a two-edged sword. One reason that ARM does better in low-power contexts is that its simpler decoder doesn't have to be compatible with large accumulations of legacy cruft. The downside is that ARM operating systems need to be modified
    for every new chip version.

    In contrast, the latest 64-bit chips from AMD and Intel are still able to boot PC DOS, the 16-bit operating system that came with the original IBM PC. Other hardware in the system might not be supported, but the CPUs have retained backward compatibility
    with every version since 1978.

    Many of the bad things about x86 are due to this backward compatibility, but it's worth remembering the benefit that we've had as a result: New PCs have always been able to run old software."

    Read more here on the following web link:

    https://www.informit.com/articles/article.aspx?p=1676714&seqNum=6


    So i think that you can not compare x86 or x64 to ARM, since it is not just a power efficiency comparison, like some are doing by comparing the Apple M1 Pro ARM CPU to x86 and x64 CPUs. That is why i think that the x86 and x64 architectures will be here for a long time, so i think that they will be good for a long time to make money in business, and they are a good business for the USA, which makes the AMD and Intel CPUs.

    More of my philosophy about weak memory model and ARM and more of my thoughts..


    I think the ARM hardware memory model is not good, since it is a weak memory model, so ARM has to provide us with a TSO memory model that is compatible with the x86 TSO memory model, and read what Kent Dickey is saying about it in my following writing:


    ProValid, LLC was formed in 2003 to provide hardware design and verification consulting services.

    Kent Dickey, founder and President, has had 20 years experience in hardware design and verification. Kent worked at Hewlett-Packard and Intel Corporation, leading teams in ASIC chip design and pre-silicon and post-silicon hardware verification. He
    architected bus interface chips for high-end servers at both companies. Kent has received more than 10 patents for innovative work in both design and verification.

    Read more here about him:

    https://www.provalid.com/about/about.html


    And read the following thoughts of Kent Dickey about weak memory models such as ARM's:

    "First, the academic literature on ordering models is terrible. My eyes
    glaze over and it's just so boring.

    I'm going to guess "niev" means naive. I find that surprising since x86
    is basically TSO. TSO is a good idea. I think weakly ordered CPUs are a
    bad idea.

    TSO is just a handy name for the Sparc and x86 effective ordering for
    writeback cacheable memory: loads are ordered, and stores are buffered and will complete in order but drain separately from the main CPU pipeline. TSO can allow loads to hit stores in the buffer and see the new value, this doesn't really matter for
    general ordering purposes.

    TSO lets you write basic producer/consumer code with no barriers. In fact, about the only type of code that doesn't just work with no barriers on TSO is Lamport's Bakery Algorithm since it relies on "if I write a location and read it back and it's still
    there, other CPUs must see that value as well", which isn't true for TSO.

    Lock free programming "just works" with TSO or stronger ordering guarantees, and it's extremely difficult to automate putting in barriers for complex algorithms for weakly ordered systems. So code for weakly ordered systems tend to either toss in lots of
    barriers, or use explicit locks (with barriers). And extremely weakly ordered systems are very hard to reason about, and especially hard to program since many implementations are not as weakly ordered as the specification says they could be, so just
    running your code and having it work is insufficient. Alpha was terrible in this regard, and I'm glad it's silliness died with it.

    HP PA-RISC was documented as weakly ordered, but all implementations
    guaranteed full system sequential consistency (and it was tested in and enforced, but not including things like cache flushing, which did need barriers). No one wanted to risk breaking software from the original in-order fully sequential machines that might have relied on it. It wasn't really a performance issue, especially once OoO was added.

    Weakly ordered CPUs are a bad idea in much the same way in-order VLIW is a bad idea. Certain niche applications might work out fine, but not for a general purpose CPU. It's better to throw some hardware at making TSO perform well, and keep the software
    simple and easy to get right.

    Kent"


    Read the rest on the following web link:

    https://groups.google.com/g/comp.arch/c/fSIpGiBhUj0
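    Kent Dickey's producer/consumer point can be illustrated one level up: portable code should not depend on the hardware ordering model at all, but on synchronization primitives that insert the needed barriers themselves, so the same source is correct on TSO and on weakly ordered machines alike. A hedged Python sketch using threading.Event (the Event's set()/wait() pair, not the hardware, provides the release/acquire ordering here):

```python
import threading

def produce_consume() -> int:
    """Producer publishes data, consumer reads it only after the flag.

    On TSO hardware the store-then-flag pattern works with no
    barriers; on weakly ordered hardware the primitive's internal
    barriers restore the same guarantee, so this code is portable.
    """
    data = {}
    ready = threading.Event()
    result = []

    def producer():
        data["payload"] = 42   # store the data first...
        ready.set()            # ...then publish the flag (release)

    def consumer():
        ready.wait()           # observe the flag (acquire)...
        result.append(data["payload"])  # ...so the data is visible

    t_consumer = threading.Thread(target=consumer)
    t_producer = threading.Thread(target=producer)
    t_consumer.start()
    t_producer.start()
    t_producer.join()
    t_consumer.join()
    return result[0]

if __name__ == "__main__":
    print(produce_consume())  # 42
```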




    Tandem cells using perovskites and silicon make solar power more efficient and affordable

    "Research into 'miracle material' perovskite in the past decade is now bearing fruit with more labs crossing the 30 percent barrier for solar cells. Solar is already a cost-effective method for harnessing renewable energy and is deployed across large
    parts of the planet in a bid to move away from fossil fuels."

    Read more here:

    https://interestingengineering.com/innovation/tandem-solar-cells-30-percent-energy-conversion-perovskites-silicon


    And Toyota Motor Corporation is a Japanese multinational automotive manufacturer headquartered in Toyota City, Aichi, Japan. It was founded by Kiichiro Toyoda and incorporated on August 28, 1937. Toyota is one of the largest automobile manufacturers in the world, producing about 10 million vehicles per year. Toyota has announced a battery with a range of 1,200 km and a recharge in 10 minutes, and Toyota seems to have definitively solved both the stability problem and the production cost, and you can read about it in the following article (you can translate the article from French to English):

    Toyota announces a battery with a range of 1,200 km and a recharge in 10 minutes!

    Read more here:

    https://www.futura-sciences.com/tech/actualites/voiture-electrique-toyota-annonce-batterie-autonomie-1-200-km-recharge-10-min-106302/


    I invite you to read the following web page from IBM that says that AES 256 encryption is safe from large quantum computers:

    https://cloud.ibm.com/docs/key-protect?topic=key-protect-quantum-safe-cryptography-tls-introduction


    And read the following so as to understand it correctly:

    And IBM set to revolutionize data security with latest quantum-safe technology

    Read more here in the following new article:

    https://interestingengineering.com/innovation/ibm-revolutionizes-data-security-with-quantum-safe-technology


    And I have also just read the following article that says the following:

    "AES-128 and RSA-2048 both provide adequate security against classical attacks, but not against quantum attacks. Doubling the AES key length to 256 results in an acceptable 128 bits of security, while increasing the RSA key by more than a factor of 7.5
    has little effect against quantum attacks."

    Read more here:

    https://techbeacon.com/security/waiting-quantum-computing-why-encryption-has-nothing-worry-about


    So i think that AES-256 encryption is acceptable encryption for quantum computers.
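    The quoted claims match the arithmetic of Grover's algorithm, which speeds up brute-force key search only quadratically and therefore halves the effective key strength. A small sketch of that arithmetic:

```python
def grover_effective_bits(key_bits: int) -> int:
    """Effective security of a symmetric key against Grover's search.

    Grover's algorithm searches N = 2**k keys in about sqrt(N) =
    2**(k/2) steps, so a k-bit symmetric key yields roughly k/2
    bits of security against a quantum brute-force attack.
    """
    return key_bits // 2

if __name__ == "__main__":
    for k in (128, 256):
        print(f"AES-{k}: ~{grover_effective_bits(k)} bits vs quantum search")
    # AES-256 under Grover (2**128 steps) matches a classical
    # brute-force of AES-128, which is exactly the quoted claim.
    print(2 ** grover_effective_bits(256) == 2 ** 128)  # True
```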


    And symmetric encryption, or more specifically AES-256, is believed to be quantum resistant. That means that quantum computers are not expected to be able to reduce the attack time enough to be effective if the key sizes are large enough, and to give you more proof of it, look at the following article from ComputerWorld, where Lamont Wood is saying:

    "But using quantum technology with the same throughput, exhausting the possibilities of a 128-bit AES key would take about six months. If a quantum system had to crack a 256-bit key, it would take about as much time as a conventional computer needs to
    crack a 128-bit key.
    A quantum computer could crack a cipher that uses the RSA or EC algorithms almost immediately."

    Read more here on ComputerWorld:

    https://www.computerworld.com/article/2550008/the-clock-is-ticking-for-encryption.html


    And about symmetric encryption and quantum computers, read more here:

    Is AES-256 Quantum Resistant?

    https://medium.com/@wagslane/is-aes-256-quantum-resistant-d3f776163672


    And that is why i have implemented parallel AES encryption with 256-bit keys in my following interesting software project called Parallel Archiver; you can read about it and download it from here:

    https://sites.google.com/site/scalable68/parallel-archiver


    TSMC: Chinese curbs on rare metal exports will not have immediate effect

    Read more here:

    https://www.tomshardware.com/news/tsmc-export-curbs-on-rare-metal-exports-will-not-have-immediate-effect


    And i invite you to read carefully about the new LongNet that scales the sequence length of Transformers to 1,000,000,000 tokens (and notice in my explanation below that sequence length is not the context window):

    https://huggingface.co/papers/2307.02486


    So i say that you have to understand that the sequence length primarily refers to the input length during inference or when using the model for prediction. It determines the maximum length of the prompt or input text that the model can process at once.

    During training, the context window or context size is used, which determines the length of the text that the model takes into account for predicting the next token in a sequence. The context window is typically smaller than the maximum sequence length.

    So to clarify:

    - Sequence length: Refers to the maximum length of the prompt or input text during inference or prediction.

    - Context window: Refers to the length of the preceding text that the model considers during training to predict the next token in a sequence.
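    To make the distinction concrete, here is a toy sketch that slices a long token sequence into fixed-size training contexts (the token IDs and window size are arbitrary examples of mine; real training pipelines typically use overlapping windows and special tokens):

```python
def context_windows(tokens: list, window: int) -> list:
    """Split a token sequence into training contexts of at most `window` tokens.

    The full `tokens` list plays the role of the sequence length;
    each slice is one context a model would see during training.
    """
    return [tokens[i:i + window] for i in range(0, len(tokens), window)]

if __name__ == "__main__":
    seq = list(range(10))           # a 10-token "sequence"
    print(context_windows(seq, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```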


    So i invite you to read my thoughts at the following web link about the limitations of Large Language Models such as GPT-4 etc.:


    https://groups.google.com/g/alt.culture.morocco/c/SjBB8Wd-kGI


    And OpenAI has set up a division responsible for creating an AI that will control the development of superintelligence.

    Read more here (and you can translate the article from french to english):

    https://intelligence-artificielle.developpez.com/actu/346215/OpenAI-met-sur-pieds-une-division-chargee-de-creer-une-IA-qui-va-controler-le-developpement-de-la-superintelligence-supposee-etre-plus-intelligente-que-l-homme-Elle-pourrait-arriver-d-ici-2030/


    Since i am speaking about objectivity in my previous thoughts below, i will ask a philosophical question:

    What is the objective and what is objective truth ?

    So i will now discover patterns with my fluid intelligence that answer the above question, and here they are, so i will start with objective truth:

    What is objective truth and what is subjective truth ?

    So for example when we look at the following equality: a + a = 2*a, it is objective truth, since it can be made an acceptable general truth, so then i can say that objective truth is a truth that can be made an acceptable general truth, and subjective truth is a truth that can not be made an acceptable general truth, like saying that Jeff Bezos is the best human among humans, which is a subjective truth. So i can say that in mathematics we are also using the rules of logic so as to logically prove whether a theorem or the like is true or not.

    So then from the above pattern that i am discovering with my fluid intelligence, i will say that the "objective" is not the truth, but it is objective information or analysis that is based on factual evidence, verifiable data, and logical reasoning, so you are noticing that the objective is the "way" to find the truth, but it is not itself the truth that is the goal of the way of the objective.

    So i have to make you notice that the below paper of 2019 is in accordance with Hugh Everett III's model, and when it says that the quantum experiment suggests there's no such thing as objective reality, i think that it doesn't mean that there is no reality, but that the classical materialistic model is not the right objectivity, since the quantum world has a different way of behaving, like with the qubits of the register of quantum computers that can be in different states. So that doesn't mean that this being in different states is not objective, but i think it means that the classical way of explaining becomes not objective. So in conclusion, i think that the patterns below that i am discovering about the multiple universes that create our tuned universe are still objective and i think they are correct, so i invite you to read my thoughts below so as to understand my views:

    So the previous pattern that i had discovered was not so true, since a new paper from the year 2019 showed that there is no collapse, and i invite you to look at the following video so as to understand it:

    https://www.youtube.com/watch?v=h75DGO3GrF4


    And you can read about the new paper of year 2019 in the following MIT technology review:

    A quantum experiment suggests there’s no such thing as objective reality

    https://www.technologyreview.com/2019/03/12/136684/a-quantum-experiment-suggests-theres-no-such-thing-as-objective-reality/


    Other than that , i think the below patterns that i have just discovered are true, so i invite you to read them carefully in my following thoughts:

    So now i have just discovered another pattern with my fluid intelligence by looking at the following video, and i invite you to look at it:

    Is math discovered or invented? - Jeff Dekofsky

    https://www.youtube.com/watch?v=X_xR5Kes4Rs


    So the important pattern that i am discovering with my fluid intelligence in the above video is the following:


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)