Hello,
More of my philosophy about my kind of personality and about my
new software projects..
I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..
I think I am really smart, and I am quickly implementing interesting
software projects that use artificial intelligence for mathematics and
operational research. But my other software project is also solving an
important problem in operational research that uses mathematics: I have
just read the following book (and other books like it) by a well-known
PhD researcher on operational research and capacity planning, here it is:
Performance by Design: Computer Capacity Planning by Example
https://www.amazon.ca/Performance-Design-Computer-Capacity-Planning/dp/0130906735
So I have just found that the methodologies for the E-Business service
of this well-known PhD researcher don't work, because they do the
calculations for a given arrival rate that is statistically and
empirically measured from the behavior of customers, but I think that is
not correct. So I am being inventive, and I have come up with my new
methodology that fixes the arrival rate from the data by using a
hyperexponential service distribution (and it is mathematical); it is
also good for Denial-of-Service (DoS) attacks. I will write a book about
it that will teach my new methodology, and I will also explain the
mathematics behind it; of course I will take care of the QoS, and my new
methodology will work for cloud computing and for computer servers.
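The post does not show the methodology itself, so here is only a minimal
sketch of the kind of queueing mathematics involved. It is my own toy
example, not the book's or the post's method: it assumes an M/G/1 queue
whose service time is a two-branch hyperexponential (H2) distribution,
and computes the mean waiting time with the Pollaczek-Khinchine formula.

```python
import math

# Toy sketch (my assumption, not the post's actual methodology):
# mean waiting time in an M/G/1 queue whose service time is a
# two-branch hyperexponential (H2) distribution, computed with the
# Pollaczek-Khinchine formula W = lam * E[S^2] / (2 * (1 - rho)).

def h2_moments(p1, mu1, mu2):
    """First and second moments of an H2 service time: with
    probability p1 the service rate is mu1, else mu2 (each branch
    is exponential)."""
    p2 = 1.0 - p1
    m1 = p1 / mu1 + p2 / mu2                  # E[S]
    m2 = 2.0 * (p1 / mu1**2 + p2 / mu2**2)    # E[S^2]
    return m1, m2

def mg1_mean_wait(lam, p1, mu1, mu2):
    """Mean waiting time in queue for Poisson arrivals at rate lam."""
    m1, m2 = h2_moments(p1, mu1, mu2)
    rho = lam * m1                            # server utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: rho >= 1")
    return lam * m2 / (2.0 * (1.0 - rho))

# Example: highly variable (bursty) service, arrival rate 0.5 req/s.
w = mg1_mean_wait(lam=0.5, p1=0.9, mu1=2.0, mu2=0.2)
print(round(w, 4))  # → 2.5952
```

Note how the second moment E[S^2] of the hyperexponential service time,
not just the mean, drives the waiting time: high service variability is
exactly what makes a fixed empirical arrival rate misleading.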
But I am not only implementing my software projects below using
mathematics, artificial intelligence and operational research; I am also
an inventor of many scalable algorithms and other algorithms, and here
is one of the scalable algorithms that I have invented, so that you can
see that I am telling the truth:
More about my powerful invention of a scalable reference counting algorithm..
I invite you to read the following web page:
Why is memory reclamation so important?
https://concurrencyfreaks.blogspot.com/search?q=resilience+and+urcu
Notice that it says the following about RCU:
"Reason number 4, resilience
Another reason to go with lock-free/wait-free data structures is because
they are resilient to failures. On a shared memory system with multiple processes accessing the same data structure, even if one of the
processes dies, the others will be able to progress in their work. This
is the true gem of lock-free data structures: progress in the presence
of failure. Blocking data structures (typically) do not have this
property (there are exceptions though). If we add a blocking memory
reclamation (like URCU) to a lock-free/wait-free data structure, we are
losing this resilience because one dead process will prevent further
memory reclamation and eventually bring down the whole system.
There goes the resilience advantage out the window."
So I think that RCU cannot be used as reference counting, since it is
blocking on the writer side, so it is not resilient to failures, since
it is not lock-free on the writer side.
So this is why I have invented my powerful scalable reference counting
with efficient support for weak references, which is lock-free in its
scalable reference counting, and here it is:
https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references
And my scalable reference counting algorithm is of the SCU(0,1) class of
algorithms, so under scheduling conditions which approximate those found
in commercial hardware architectures, it becomes wait-free with a system
latency of O(sqrt(k)) and an individual latency of O(k*sqrt(k)), where k
is the number of threads.
The proof is in the following PhD paper:
The paper says: "This paper suggests a simple solution to this problem.
We show that, for a large class of lock-free algorithms, under
scheduling conditions which approximate those found in commercial
hardware architectures, lock-free algorithms behave as if they are
wait-free. In other words, programmers can keep on designing simple
lock-free algorithms instead of complex wait-free ones, and in practice,
they will get wait-free progress."
And it says in the analysis of the class SCU(q, s):
"Given an algorithm in SCU(q, s) on k correct processes under a uniform
stochastic scheduler, the system latency is O(q + s*sqrt(k)), and the
individual latency is O(k(q + s*sqrt(k)))."
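To make these asymptotic bounds concrete, here is a tiny numeric
illustration of how they scale with the thread count k. The hidden
constants of the O-notation are set to 1 here, which is my own
simplification purely for readability:

```python
import math

# Illustration of the SCU(q, s) latency bounds from the paper, with
# the constants hidden by the O-notation set to 1:
#   system latency     ~ q + s*sqrt(k)
#   individual latency ~ k * (q + s*sqrt(k))
# For the SCU(0, 1) class this reduces to sqrt(k) and k*sqrt(k).

def system_latency(q, s, k):
    return q + s * math.sqrt(k)

def individual_latency(q, s, k):
    return k * system_latency(q, s, k)

for k in (4, 16, 64):
    print(k, system_latency(0, 1, k), individual_latency(0, 1, k))
# → 4 2.0 8.0
# → 16 4.0 64.0
# → 64 8.0 512.0
```

The point of the bound is that system-wide progress degrades only as
sqrt(k), not linearly in the number of threads.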
More of my philosophy about my next software project and more..
I think I am really smart, and I have just talked to you about my new
software project, read below about it. My next software project is to
implement professional software for mathematics and operational
research: a sophisticated solver of linear and non-linear programming
problems using artificial intelligence. I am currently thinking about
how to implement the sensitivity analysis part, and of course my
software will avoid premature convergence, and it will also be much more
scalable using multicores, so as to search with artificial intelligence
much faster for the global optimum.
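The post mentions sensitivity analysis without showing any code, so here
is only a toy sketch of what it means for a linear program. The example
problem and the naive vertex-enumeration solver are both my own
illustration, not the project's code: perturbing a constraint's
right-hand side and re-solving approximates that constraint's shadow
price.

```python
from itertools import combinations

# Toy 2-variable LP solved by enumerating vertices of the feasible
# region (fine for an illustration; real solvers use simplex or
# interior-point methods).
#   maximize c.x  subject to  A x <= b,  x >= 0

def solve_lp(c, A, b):
    # Add x >= 0 as two extra constraints: -x1 <= 0 and -x2 <= 0.
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = list(b) + [0.0, 0.0]
    best, best_x = None, None
    for i, j in combinations(range(len(rows)), 2):
        (a11, a12), (a21, a22) = rows[i], rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue                  # parallel constraints: no vertex
        # Cramer's rule for the 2x2 intersection of constraints i and j.
        x1 = (rhs[i] * a22 - a12 * rhs[j]) / det
        x2 = (a11 * rhs[j] - rhs[i] * a21) / det
        if all(r[0] * x1 + r[1] * x2 <= v + 1e-9
               for r, v in zip(rows, rhs)):
            val = c[0] * x1 + c[1] * x2
            if best is None or val > best:
                best, best_x = val, (x1, x2)
    return best, best_x

c = [3.0, 2.0]
A = [[1.0, 1.0], [1.0, 3.0]]
b = [4.0, 6.0]

z0, _ = solve_lp(c, A, b)
# Sensitivity: bump the first right-hand side by 1 and re-solve; the
# change in the optimum approximates that constraint's shadow price.
z1, _ = solve_lp(c, A, [b[0] + 1.0, b[1]])
print(z0, z1 - z0)  # → 12.0 3.0
```

So in this toy problem one extra unit of the first resource is worth 3.0
in the objective, which is exactly the kind of question sensitivity
analysis answers.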
More of my philosophy about artificial intelligence and about non-linear regression..
I think I am really smart, and I will tell you more about my interesting
software project for mathematics. My new software project uses
artificial intelligence to implement a generalized way to solve
non-linear "multiple" regression, and it is much more powerful than the
Levenberg–Marquardt algorithm, since I am implementing a smart algorithm
using artificial intelligence that avoids premature convergence, which
is one of the most important things; it will also be much more scalable
using multicores, so as to search with artificial intelligence much
faster for the global optimum. I am doing it this way to be really
professional, and I will give you a tutorial that explains my algorithms
that use artificial intelligence so that you can learn from them.
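Since PSO comes up in this post as one of the population-based methods
for non-linear least squares, here is a minimal sketch of that idea. It
is my own toy example (fitting y = a*exp(b*x) with a plain particle
swarm), not the project's actual algorithm:

```python
import math
import random

# Minimal particle swarm optimization (PSO) sketch for a non-linear
# least squares fit of y = a * exp(b * x).  My own toy illustration,
# not the post's software.

random.seed(42)

xs = [0.1 * i for i in range(11)]
a_true, b_true = 2.0, 0.5
ys = [a_true * math.exp(b_true * x) for x in xs]   # noise-free data

def sse(params):
    """Sum of squared residuals for candidate parameters (a, b)."""
    a, b = params
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

def pso(n_particles=30, iters=300, lo=(-5.0, -2.0), hi=(5.0, 2.0)):
    dim = 2
    pos = [[random.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best
    pbest_f = [sse(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, attraction
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp to the search box to keep exp() well-behaved.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            f = sse(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = pso()
```

Because the swarm keeps many candidates spread over the search box, it
is less prone to getting stuck in a local optimum than a single-point
local method, which is the premature-convergence concern raised above;
it is also embarrassingly parallel, since each particle's sse() can be
evaluated on a separate core.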
And read my previous thoughts:
More of my philosophy about non-linear regression and more..
I think I am really smart, and I have just quickly finished the software
implementation of the Levenberg–Marquardt algorithm and of the simplex
algorithm to solve non-linear least squares problems, and I have also
just implemented the PSO and genetic algorithms that solve non-linear
least squares problems. I will soon implement a generalized way with
artificial intelligence in the software to solve non-linear "multiple"
regression. I have also noticed that in mathematics you have to take
care of the variability of the y values in non-linear least squares
problems when approximating. The Levenberg–Marquardt algorithm (LMA or
just LM) that I have just implemented, also known as the damped
least-squares (DLS) method, is used to solve non-linear least squares
problems. These minimization problems arise especially in least squares
curve fitting. The Levenberg–Marquardt algorithm is used in many
software applications for solving generic curve-fitting problems; it was
found to be an efficient, fast and robust method which also has a good
global convergence property. For these reasons, it has been incorporated
into many good commercial packages performing non-linear regression. But
my way of implementing non-linear "multiple" regression in the software
will be much more powerful than the Levenberg–Marquardt algorithm, and
of course I will share with you many parts of my software project,
so stay tuned !
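For readers unfamiliar with the damped least-squares idea discussed
above, here is a compact Levenberg–Marquardt sketch for the same kind of
one-curve fit. It is my own illustration of the textbook method, not the
post's implementation:

```python
import math

# Compact Levenberg-Marquardt sketch for fitting y = a * exp(b * x).
# My own illustration of the damped least-squares idea, not the post's
# code: each step solves (J^T J + lam I) delta = -J^T r, shrinking the
# damping lam after a successful step and growing it after a failure.

def lm_fit(xs, ys, a=1.0, b=0.0, lam=1e-3, iters=100):
    def residuals(a, b):
        return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
    def ssq(r):
        return sum(v * v for v in r)
    r = residuals(a, b)
    cost = ssq(r)
    for _ in range(iters):
        # Jacobian of the residuals: dr/da = -exp(b*x),
        #                            dr/db = -a*x*exp(b*x).
        J = [(-math.exp(b * x), -a * x * math.exp(b * x)) for x in xs]
        # Damped normal equations, solved directly for the 2x2 case.
        g11 = sum(j[0] * j[0] for j in J) + lam
        g12 = sum(j[0] * j[1] for j in J)
        g22 = sum(j[1] * j[1] for j in J) + lam
        h1 = -sum(j[0] * v for j, v in zip(J, r))
        h2 = -sum(j[1] * v for j, v in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (h1 * g22 - g12 * h2) / det
        db = (g11 * h2 - h1 * g12) / det
        r_new = residuals(a + da, b + db)
        cost_new = ssq(r_new)
        if cost_new < cost:          # accept the step, damp less
            a, b, r, cost = a + da, b + db, r_new, cost_new
            lam *= 0.5
        else:                        # reject the step, damp more
            lam *= 2.0
    return a, b

xs = [0.1 * i for i in range(11)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a, b = lm_fit(xs, ys)
```

The damping term interpolates between gradient descent (large lam, slow
but safe) and Gauss-Newton (small lam, fast near the optimum), which is
why LM converges quickly from a decent starting point but, unlike the
population-based methods above, can still stall in a local minimum.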
Thank you,
Amine Moulay Ramdane.
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)