Hello,
More of my philosophy about the new Zen 4 AMD Ryzen™ 9 7950X and more of my thoughts..
I am a white arab from Morocco, and i think i am smart since i have also invented many scalable algorithms and other algorithms..
I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, so i have just looked at the new Zen 4 AMD Ryzen™ 9 7950X CPU, and i invite you to look at it here:
https://www.amd.com/en/products/cpu/amd-ryzen-9-7950x
But notice carefully that the problem is with the number of supported memory channels, since it supports just two memory channels, so it is not good, since for example my following Open source software project of Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well is scaling around 8X on my 16 cores Intel Xeon with 2 NUMA nodes and with 8 memory channels, but it will not scale correctly on the new Zen 4 AMD Ryzen™ 9 7950X CPU with just 2 memory channels since it is also memory-bound, and here is my Powerful Open source software project of Parallel C++ Conjugate Gradient Linear System Solver Library that scales very well and i invite you to take carefully a look at it:
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
So i advise you to buy an AMD Epyc CPU or an Intel Xeon CPU that supports 8 memory channels.
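And so that to make this memory-bound argument more concrete, here is a minimal sketch, assuming a machine with g++ or clang++ and OpenMP (it is my own illustrative example, it is not code from my library): it is a STREAM-like triad loop that does very little arithmetic per byte moved, so like my Conjugate Gradient solver it is limited by memory bandwidth, and adding more cores helps much less than adding more memory channels.

// memory_bound_triad.cpp -- illustrative sketch, assuming g++ or clang++ with OpenMP
// Build (assumption): g++ -O3 -fopenmp memory_bound_triad.cpp -o triad
#include <vector>
#include <cstdio>
#include <chrono>
#include <omp.h>

int main()
{
    const long long n = 20000000;               // three arrays of doubles, ~480 MB in total
    std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
    const double s = 3.0;

    auto t0 = std::chrono::steady_clock::now();
    // STREAM-like triad: very little arithmetic per byte moved, so the loop is
    // limited by memory bandwidth and not by the number of cores.
    #pragma omp parallel for
    for (long long i = 0; i < n; ++i)
        a[i] = b[i] + s * c[i];
    auto t1 = std::chrono::steady_clock::now();

    const double seconds = std::chrono::duration<double>(t1 - t0).count();
    const double gbytes  = 3.0 * n * sizeof(double) / 1e9;   // 2 reads + 1 write per element
    std::printf("threads=%d  bandwidth ~= %.2f GB/s\n",
                omp_get_max_threads(), gbytes / seconds);
    return 0;
}

If you run it with OMP_NUM_THREADS set to 1, 2, 4, 8, the measured bandwidth on a CPU with 2 memory channels typically stops improving after a few threads, while a CPU with 8 memory channels keeps scaling further, and this is the behavior that i am describing above.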
And i invite you to read my thoughts about technology here:
https://groups.google.com/g/soc.culture.usa/c/N_UxX3OECX4
More of my philosophy about the problem with capacity planning of a website and more of my thoughts..
I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, and i have just invented a new methodology that simplifies a lot the capacity planning of a website that can be of a three-tier architecture with the web servers and with the application servers and with the database servers, but i have to explain more so that you understand the big problem with capacity planning of a website, so when you want for example to use web testing, the problem is how to choose for example the correct distribution of the read and write and delete transactions on the database of a website? so if it is not realistic you can go beyond the knee of the curve and get a not acceptable waiting time, and the Mean value analysis (MVA) algorithm has the same problem, so how to solve the problem? so as you are noticing it is why i have come with my new methodology that uses mathematics that solves the problem. And read my previous thoughts:
More of my philosophy about website capacity planning and about Quality of service and more of my thoughts..
I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, so i think that you have to lower to a certain level the QoS (quality of service) of a website, since you have to fix the limit of the number of connections that we allow to the website so that to not go beyond the knee of the curve, and of course i will soon show you my mathematical calculations of my new methodology of how to do capacity planning of a website, and of course you have to know that we have to do capacity planning using mathematics so that to know the average waiting time etc., and this permits us to calculate the number of connections that we allow to the website.
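And so that you can see a simple example of this kind of mathematics, here is a minimal sketch, assuming a single M/M/1 queue for one tier of the website and assumed example values for the service time and for the QoS target (it is only an illustration, it is not my full methodology):

// capacity_sketch.cpp -- illustrative M/M/1 sketch with assumed example values
#include <cstdio>

int main()
{
    const double service_time = 0.020;   // mean service time per request (seconds), assumed value
    const double max_response = 0.200;   // QoS target: acceptable mean response time (seconds)

    // M/M/1: utilization U = lambda * S, mean response time R = S / (1 - U).
    // R grows without bound as U -> 1; the "knee of the curve" is where R starts
    // to explode, so we cap lambda from the QoS target:
    //   R <= Rmax  =>  lambda <= (1 - S/Rmax) / S
    const double lambda_max = (1.0 - service_time / max_response) / service_time;
    std::printf("max admissible arrival rate ~= %.1f requests/second\n", lambda_max);

    // Show how the response time degrades as we approach the knee of the curve.
    for (double u = 0.1; u < 1.0; u += 0.1) {
        double r = service_time / (1.0 - u);
        std::printf("utilization %.0f%%  ->  mean response time %.0f ms\n", 100.0 * u, 1000.0 * r);
    }
    return 0;
}

With these assumed numbers the model caps the load at about 45 requests per second, and by Little's law the number of simultaneous connections that we allow is then roughly this arrival rate multiplied by the acceptable response time.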
More of my philosophy about the Mean value analysis (MVA) algorithm and more of my thoughts..
I think i am highly smart since I have passed two certified IQ tests and i have scored above 115 IQ, and i have just read the following paper
about the Mean value analysis (MVA) algorithm, and i invite you to read it carefully:
https://www.cs.ucr.edu/~mart/204/MVA.pdf
But i say that i am understanding easily the above paper of the Mean value analysis (MVA) algorithm, but i say that the above paper doesn't say that you have to empirically collect the visit ratio and the average demand of each class, so it is not so practical, since i say that you can and you have for example to calculate the "tendency" by also for example rendering the not-memoryless service of for example the database to a memoryless service, but don't worry since i will soon make you understand my new methodology.
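And so that you can see what the MVA recursion of the above paper computes, here is a minimal sketch of the exact single-class MVA algorithm, with made-up example values for the service demands and for the think time (it is my own illustration, it is not code from the paper):

// mva_sketch.cpp -- exact Mean Value Analysis for a closed, single-class network
#include <vector>
#include <cstdio>

int main()
{
    // Assumed example: service demand D_k (visit ratio * service time) per station,
    // e.g. web server, application server, database server of a three-tier website.
    std::vector<double> demand = { 0.005, 0.010, 0.020 };   // seconds
    const double think_time = 1.0;                          // user think time Z (seconds)
    const int population = 50;                              // number of concurrent users N

    std::vector<double> queue(demand.size(), 0.0);          // Q_k, starts empty
    double response = 0.0, throughput = 0.0;

    // Classical exact MVA recursion over the population 1..N.
    for (int n = 1; n <= population; ++n) {
        response = 0.0;
        std::vector<double> r(demand.size());
        for (std::size_t k = 0; k < demand.size(); ++k) {
            r[k] = demand[k] * (1.0 + queue[k]);            // residence time at station k
            response += r[k];
        }
        throughput = n / (think_time + response);           // X = N / (Z + R)
        for (std::size_t k = 0; k < demand.size(); ++k)
            queue[k] = throughput * r[k];                   // Little's law per station
    }

    std::printf("N=%d users: throughput ~= %.1f req/s, response time ~= %.1f ms\n",
                population, throughput, 1000.0 * response);
    return 0;
}

And notice that to run it you must already know the per-station service demands, so it also shows the practical problem that i am talking about: those demands have to be collected empirically.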
More of my philosophy about formal methods and about Leslie Lamport and more of my thoughts..
I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, and I have just looked at the following video about the man who revolutionized computer science with math, and i invite you to look at it:
https://www.youtube.com/watch?v=rkZzg7Vowao
So i say that in mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. And Leslie Lamport the known scientist is saying in the above video the following: "An algorithm without a proof is a conjecture, and if you are proving things, that means using mathematics.", so then i think that Leslie Lamport the known scientist is not thinking correctly by saying so, since i think that you can also prove an algorithm by raising much more the probability of its correctness.
And I invite you to read the following new article about the known computer expert in the above video called Leslie Lamport, which says that programmers need to use math by using formal methods, and in which Lamport discusses some of his work, such as the TLA+ specification language (developed by Lamport over the past few decades, the TLA+ [Temporal Logic of Actions] specification language allows engineers to describe the objectives of a program in a precise and mathematical way), and also cites some of the benefits of this approach.
Read more in the following article and you have to translate it from French to English:
https://www.developpez.com/actu/333640/Un-expert-en-informatique-declare-que-les-programmeurs-ont-besoin-de-plus-de-mathematiques-ajoutant-que-les-ecoles-devraient-repenser-la-facon-dont-elles-enseignent-l-informatique/
But to answer the above expert called Leslie Lamport, i invite you to carefully read the following interesting web page about why people don't use formal methods:
WHY DON'T PEOPLE USE FORMAL METHODS?
https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
More of my philosophy of the polynomial-time complexity of race detection and more of my thoughts..
I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have quickly understood how Rust
detects race conditions, but i think that a slew of
“partial order”-based methods have been proposed, whose
goal is to predict data races in polynomial time, but at the
cost of being incomplete and failing to detect data races in
"some" traces. These include algorithms based on the classical happens-before partial order, and those based
on newer partial orders that improve the prediction of data
races over happens-before, so i think that we have to be optimistic,
so read the following web page about the Sanitizers:
https://github.com/google/sanitizers
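And so that you can see what a happens-before, partial-order based check looks like, here is a minimal sketch of mine that uses vector clocks; it is not the algorithm of a particular tool, it only shows the core idea that two accesses to the same location are a data race when neither access happens before the other and at least one of them is a write:

// happens_before_sketch.cpp -- minimal vector-clock race check (illustrative only)
#include <vector>
#include <cstdio>

struct Access {
    int thread;                    // which thread performed the access
    bool is_write;                 // write or read
    std::vector<int> clock;        // vector clock at the time of the access
};

// a happens before b if a.clock <= b.clock componentwise and they differ somewhere.
static bool happens_before(const Access& a, const Access& b)
{
    bool strictly_less = false;
    for (std::size_t i = 0; i < a.clock.size(); ++i) {
        if (a.clock[i] > b.clock[i]) return false;
        if (a.clock[i] < b.clock[i]) strictly_less = true;
    }
    return strictly_less;
}

// Two accesses to the same location race if they are unordered and one is a write.
static bool is_race(const Access& a, const Access& b)
{
    return (a.is_write || b.is_write) &&
           !happens_before(a, b) && !happens_before(b, a);
}

int main()
{
    // Thread 0 writes x with clock (1,0); thread 1 writes x with clock (0,1):
    // neither happens before the other, so this is reported as a data race.
    Access w0{0, true, {1, 0}};
    Access w1{1, true, {0, 1}};
    std::printf("race detected: %s\n", is_race(w0, w1) ? "yes" : "no");
    return 0;
}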
And notice carefully the ThreadSanitizer, so read carefully
the following paper about ThreadSanitizer:
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35604.pdf
And it says in the conclusion the following:
"ThreadSanitizer uses a new algorithm; it has several modes of operation, ranging from the most conservative mode (which has few false positives but also misses real races) to a very aggressive one (which
has more false positives but detects the largest number of
real races)."
So as you are noticing, since the very aggressive mode doesn't detect all the data races, it misses a really small number of real races, so it is like a very high probability of really detecting real races,
and i think that you can also use my below methodology of using incrementally a model from the source code and using the Spin model checker so that to raise even more the probability of detecting real races.
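And here is a small racy program of mine so that you can try ThreadSanitizer yourself (it is not code from the above paper); the build command below is the usual one for Clang, and GCC also supports -fsanitize=thread:

// tsan_example.cpp -- two threads increment a shared counter without synchronization
// Build (assumption): clang++ -fsanitize=thread -g -O1 tsan_example.cpp -o tsan_example
#include <thread>
#include <cstdio>

int counter = 0;                       // shared, unprotected: this is the data race

void work()
{
    for (int i = 0; i < 100000; ++i)
        ++counter;                     // unsynchronized read-modify-write
}

int main()
{
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    // ThreadSanitizer reports a data race on `counter` at runtime; protecting it
    // with a std::mutex or making it std::atomic<int> removes the report.
    std::printf("counter = %d\n", counter);
    return 0;
}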
Read my previous thoughts:
More of my philosophy about race conditions and about composability and more of my thoughts..
I say that a model is a representation of something. It captures not all attributes of the represented thing, but rather only those seeming relevant. So my way of doing in software development in Delphi and Freepascal is also that i am using a "model" from the source code that i am executing in Spin model checking so that to detect race conditions, so i invite you to take a look at the following new tutorial that uses the powerful Spin tool:
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
So you can for example install Spin model checker so that to detect race conditions, and this is how you will get much more professional at detecting deadlocks and race conditions in parallel programming. And i invite you to look at the following video so that to know how to install Spin model checker on windows:
https://www.youtube.com/watch?v=MGzmtWi4Oq0
More of my philosophy about race detection and concurrency and more..
I have just looked quickly at different race detectors, and i think that the Intel Thread Checker from Intel company from "USA" is also very good since the Intel Thread Checker needs to instrument either the C++ source code or the compiled binary to make every memory reference and every standard Win32 synchronization primitive observable, so this instrumentation from the source code is very good since it also permits me to port my scalable algorithms inventions by for example wrapping them in some native Windows synchronization APIs, and this instrumentation from the source code is also described in the following article:
https://docs.microsoft.com/en-us/archive/msdn-magazine/2008/june/tools-and-techniques-to-identify-concurrency-issues
So i think that the other race detectors of other programming languages have to provide this instrumentation from the source code as the Intel Thread Checker from Intel company from "USA".
More of my philosophy about Rust and about memory models and about technology and more of my thoughts..
I think i am highly smart, and i say that the new programming language that we call Rust has an important problem, so read the following interesting article that says that atomic operations that have not the correct memory ordering can still cause race conditions in safe code, and this is why the suggestion made by the researchers is:
"Race detection techniques are needed for Rust, and they should focus on unsafe code and atomic operations in safe code."
Read more here:
https://www.i-programmer.info/news/98-languages/12552-is-rust-really-safe.html
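And since the above article is about Rust, but so that to keep all my examples in C++, here is a minimal sketch of mine of the same kind of problem: the atomic flag itself is race-free, but with a relaxed memory ordering it does not establish a happens-before edge, so the access to the non-atomic payload can still be a data race, and a release/acquire ordering fixes it:

// relaxed_ordering_sketch.cpp -- wrong ordering on an atomic flag, illustrative only
#include <atomic>
#include <thread>
#include <cstdio>

int payload = 0;                           // non-atomic data "protected" by the flag
std::atomic<bool> ready{false};

void producer()
{
    payload = 42;
    // BUG: relaxed ordering does not publish the write to `payload`;
    // the correct call here is ready.store(true, std::memory_order_release).
    ready.store(true, std::memory_order_relaxed);
}

void consumer()
{
    // BUG: relaxed ordering does not synchronize with the producer;
    // the correct call here is ready.load(std::memory_order_acquire).
    while (!ready.load(std::memory_order_relaxed)) { }
    std::printf("payload = %d\n", payload); // data race: may not see 42
}

int main()
{
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}

And a race detector such as ThreadSanitizer can flag the access to the payload in this sketch, which is consistent with the suggestion of the researchers to focus on atomic operations in safe code.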
More of my philosophy about programming languages and about lock-based systems and more..
I think we have to be optimistic about lock-based systems, since race conditions detection can be done in polynomial-time, and you can notice it by reading the following paper:
https://arxiv.org/pdf/1901.08857.pdf
Or by reading the following paper:
https://books.google.ca/books?id=f5BXl6nRgAkC&pg=PA421&lpg=PA421&dq=race+condition+detection+and+polynomial+complexity&source=bl&ots=IvxkORGkQ9&sig=ACfU3U2x0fDnNLHP1Cjk5bD_fdJkmjZQsQ&hl=en&sa=X&ved=2ahUKEwjKoNvg0MP0AhWioXIEHRQsDJc4ChDoAXoECAwQAw#v=onepage&q=race%20condition%20detection%20and%20polynomial%20complexity&f=false
So i think we can continue to program in lock-based systems, and about composability of lock-based systems, read my following thoughts about it:
More of my philosophy about composability and about Haskell functional language and more..
I have just read quickly the following article about composability,
so i invite you to read it carefully:
https://bartoszmilewski.com/2014/06/09/the-functional-revolution-in-c/
I am not in accordance with the above article, and i think that the above scientist is programming in Haskell functional language and it is for him the way to composability, since he says that the way of functional programming like Haskell functional programming is the way that allows composability in presence of concurrency, but for him lock-based systems don't allow it, but i don't agree with him, and i will give you the logical proof of it, and here it is, read what is saying an article from ACM that was written by both Bryan M. Cantrill and Jeff Bonwick from Sun Microsystems:
You can read about Bryan M. Cantrill here:
https://en.wikipedia.org/wiki/Bryan_Cantrill
And you can read about Jeff Bonwick here:
https://en.wikipedia.org/wiki/Jeff_Bonwick
And here is what says the article about composability in the presence of concurrency of lock-based systems:
"Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:
“Locks and condition variables do not support modular programming,” reads one typically brazen claim, “building large programs by gluing together smaller programs[:] locks make this impossible.”9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.
There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components.
Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized."
Read more here:
https://queue.acm.org/detail.cfm?id=1454462
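And so that to make the two approaches of the above quoted article concrete, here is a small sketch of mine in C++ (the class names are my own invention, they are not from the article): the first subsystem keeps its lock entirely internal, and the second one keeps no global state and leaves the locking to its client, and both of them compose into larger programs that never see the lower-level locking:

// composability_sketch.cpp -- the two lock-based composition styles, illustrative only
#include <mutex>
#include <map>
#include <string>
#include <cstdio>

// Approach 1: locking entirely internal to the subsystem.
// Callers never see the mutex, so the component composes like any other.
class InternalLockedRegistry {
    std::mutex m;
    std::map<std::string, int> table;
public:
    void put(const std::string& key, int value) {
        std::lock_guard<std::mutex> g(m);       // the lock is never held across the API boundary
        table[key] = value;
    }
    int get(const std::string& key) {
        std::lock_guard<std::mutex> g(m);
        auto it = table.find(key);
        return it == table.end() ? -1 : it->second;
    }
};

// Approach 2: no locks at all and no global state; each instance holds only
// per-instance state, and the client serializes access to its own instance
// (like the AVL tree example from the Solaris kernel mentioned above).
class PerInstanceCounter {
    long value = 0;                             // per-instance state only
public:
    void add(long d) { value += d; }            // the client must serialize calls on one instance
    long read() const { return value; }
};

int main()
{
    InternalLockedRegistry registry;            // safe to share between threads as-is
    registry.put("requests", 1);

    PerInstanceCounter counter;                 // the client wraps it with its own lock if shared
    std::mutex client_lock;
    { std::lock_guard<std::mutex> g(client_lock); counter.add(1); }

    std::printf("requests=%d counter=%ld\n", registry.get("requests"), counter.read());
    return 0;
}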
More of my philosophy about HP and about the Tandem team and more of my thoughts..
I invite you to read the following interesting article so that to notice how HP was smart by also acquiring Tandem Computers, Inc. with their "NonStop" systems and by learning from the Tandem team that has also extended HP NonStop to the x86 server platform, you can read about it in my below writing and you can read about Tandem Computers here: https://en.wikipedia.org/wiki/Tandem_Computers , so notice that Tandem Computers, Inc. was the dominant manufacturer of fault-tolerant computer systems for ATM networks, banks, stock exchanges, telephone switching centers, and other similar commercial transaction processing applications requiring maximum uptime and zero data loss:
https://www.zdnet.com/article/tandem-returns-to-its-hp-roots/
More of my philosophy about HP "NonStop" to x86 Server Platform fault-tolerant computer systems and more..
Now HP to Extend HP NonStop to x86 Server Platform
HP announced in 2013 plans to extend its mission-critical HP NonStop technology to x86 server architecture, providing the 24/7 availability required in an always-on, globally connected world, and increasing customer choice.
Read the following to notice it:
https://www8.hp.com/us/en/hp-news/press-release.html?id=1519347#.YHSXT-hKiM8
And today HP provides HP NonStop to x86 Server Platform, and here is
an example, read here:
https://www.hpe.com/ca/en/pdfViewer.html?docId=4aa5-7443&parentPage=/ca/en/products/servers/mission-critical-servers/integrity-nonstop-systems&resourceTitle=HPE+NonStop+X+NS7+%E2%80%93+Redefining+continuous+availability+and+scalability+for+x86+data+sheet
So i think programming the HP NonStop for x86 is now compatible with x86 programming.
And i invite you to read my thoughts about technology here:
https://groups.google.com/g/soc.culture.usa/c/N_UxX3OECX4
More of my philosophy about stack allocation and more of my thoughts..
I think i am highly smart since I have passed two certified IQ tests and i have scored "above" 115 IQ, so i have just looked at the x64 assembler of the C/C++ _alloca function that allocates size bytes of space from the stack, and it uses x64 assembler instructions to move the RSP register, and i think that it also aligns the address and it ensures that it doesn't go beyond the stack limit etc., and i have quickly understood the x64 assembler of it, and i invite you to look at it here:
64-bit _alloca. How to use from FPC and Delphi?
https://www.atelierweb.com/64-bit-_alloca-how-to-use-from-delphi/
But i think i am smart and i say that the benefit of using a stack comes mostly from the "reusability" of the stack, i mean it is done this way since you have for example from a thread to execute other functions or procedures and to exit from those functions or procedures, and this exiting from those functions or procedures makes the memory of the stack available again for "reusability", and it is why i think that using a dynamically allocated array as a stack is also useful since it also offers those benefits of reusability of the stack, and i think that dynamic allocation of the array will not be expensive, so it is why i think i will implement _alloca that way.
https://groups.google.com/g/alt.culture.morocco/c/JuC4jar661w
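And here is a minimal sketch of mine of that idea: a dynamically allocated array that is used as a LIFO stack, so the same memory is reused each time a scope is entered and exited, like the hardware stack that _alloca works on (the class and its names are my own illustration, they are not the _alloca implementation from the article above):

// arena_stack_sketch.cpp -- reusable dynamically allocated array used as a stack
#include <vector>
#include <cstddef>
#include <cstdio>

class ArenaStack {
    std::vector<std::byte> buffer;   // allocated once, reused afterwards
    std::size_t top = 0;             // current "stack pointer" into the buffer
public:
    explicit ArenaStack(std::size_t capacity) : buffer(capacity) {}

    // Push: like _alloca, just bump the stack pointer (no per-call heap allocation).
    // Real code would also round sizes up so that to keep the alignment, as _alloca does.
    void* allocate(std::size_t size) {
        if (top + size > buffer.size()) return nullptr;   // out of arena space
        void* p = buffer.data() + top;
        top += size;
        return p;
    }

    // Pop: exiting the "scope" makes the memory available again for reuse,
    // which is the reusability benefit of a stack discussed above.
    void release_to(std::size_t m) { top = m; }
    std::size_t mark() const { return top; }
};

int main()
{
    ArenaStack arena(1 << 20);                  // one dynamic allocation of 1 MB

    std::size_t m = arena.mark();
    int* scratch = static_cast<int*>(arena.allocate(1000 * sizeof(int)));
    for (int i = 0; i < 1000; ++i) scratch[i] = i;   // use the scratch space
    std::printf("scratch[999] = %d\n", scratch[999]);
    arena.release_to(m);                        // the memory is available again for reuse

    return 0;
}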
And i invite you to read my thoughts about technology here:
https://groups.google.com/g/soc.culture.usa/c/N_UxX3OECX4
Thank you,
Amine Moulay Ramdane.