Hello..
About Java and Delphi and Freepascal..
I have just read the following webpage:
Java is not a safe language
https://lemire.me/blog/2019/03/28/java-is-not-a-safe-language/
But as you have noticed, the webpage says:
- Java does not trap overflows
But Delphi and Freepascal do trap overflows.
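For example, here is a minimal sketch (my own illustration) of how you
trap an integer overflow in Delphi and FreePascal when overflow checking
is enabled with the {$Q+} / {$OVERFLOWCHECKS ON} compiler directive: the
overflow raises the EIntOverflow exception instead of silently wrapping
around.
---
program OverflowDemo;

{$IFDEF FPC}{$MODE DELPHI}{$ENDIF}
{$APPTYPE CONSOLE}
{$OVERFLOWCHECKS ON}  // same as {$Q+}: trap integer overflow at runtime

uses
  SysUtils;

var
  a: Integer;
begin
  a := High(Integer);
  try
    a := a + 1;  // with {$Q-} this wraps silently; with {$Q+} it raises EIntOverflow
  except
    on E: EIntOverflow do
      Writeln('Trapped integer overflow: ', E.Message);
  end;
end.
---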
And the webpage says:
- Java lacks null safety
But Delphi and FreePascal do have null safety with my MyNullable
library, since I have just posted about it by saying the following:
Here is MyNullable library for Delphi and FreePascal that brings null
safety..
Java lacks null safety. When a function receives an object, this object
might be null. That is, if you see ‘String s’ in your code, you often
have no way of knowing whether ‘s’ actually contains a String unless
you check at runtime. Can you guess whether programmers always check?
They do not, of course. In practice, mission-critical software does
crash without warning due to null values. We have two decades of
examples. In Swift or Kotlin, you have safe calls or optionals as part
of the language.
Here is my MyNullable library for Delphi and FreePascal that brings null
safety. You can read the HTML file inside the zip to learn how it works,
and you can download it from my website here:
https://sites.google.com/site/scalable68/null-safety-library-for-delphi-and-freepascal
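And here is a minimal sketch of the general idea only, with a
hypothetical TNullable record of my own just for illustration; it is
NOT the actual API of my MyNullable library, for which you have to read
the HTML file inside the zip:
---
program NullableSketch;

{$IFDEF FPC}{$MODE DELPHI}{$ENDIF}
{$APPTYPE CONSOLE}

uses
  SysUtils;

type
  { Hypothetical nullable wrapper: the value can only be read through an
    explicit check or a default, so a missing value can never be
    dereferenced by accident. }
  TNullable<T> = record
  private
    FHasValue: Boolean;
    FValue: T;
  public
    class function Some(const AValue: T): TNullable<T>; static;
    class function None: TNullable<T>; static;
    function HasValue: Boolean;
    function GetValueOrDefault(const ADefault: T): T;
  end;

class function TNullable<T>.Some(const AValue: T): TNullable<T>;
begin
  Result.FHasValue := True;
  Result.FValue := AValue;
end;

class function TNullable<T>.None: TNullable<T>;
begin
  Result.FHasValue := False;
  Result.FValue := Default(T);
end;

function TNullable<T>.HasValue: Boolean;
begin
  Result := FHasValue;
end;

function TNullable<T>.GetValueOrDefault(const ADefault: T): T;
begin
  if FHasValue then
    Result := FValue
  else
    Result := ADefault;
end;

var
  s: TNullable<string>;
begin
  s := TNullable<string>.None;
  Writeln(s.GetValueOrDefault('<no value>'));  // no crash on a missing value
  s := TNullable<string>.Some('hello');
  if s.HasValue then
    Writeln(s.GetValueOrDefault(''));
end.
---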
And the webpage says:
- Java allows data races
But for Delphi and FreePascal, I have just written about how to detect
and prevent data races and deadlocks by saying the following:
Yet more precision about the invariants of a system..
I was just thinking about Petri nets, and I have studied them some more.
They are useful for parallel programming, and what I have noticed by
studying them is that there are two methods to prove that there is no
deadlock in the system: the structural analysis with place invariants,
which you have to find mathematically, or the reachability tree. But we
have to notice that the structural analysis of Petri nets teaches you
more, because it permits you to prove that there is no deadlock in the
system, and the place invariants are calculated mathematically by
solving the following system for the given Petri net:
Transpose(x) * C = 0
(where x is a place invariant vector and C is the incidence matrix of
the Petri net)
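Here is a minimal illustration of this equation (my own sketch, not one
of the examples of my tutorial): a small Delphi/FreePascal console
program that checks that a candidate vector x is a place invariant of a
classical mutual-exclusion Petri net, by verifying that Transpose(x) * C
is the zero vector. For this net the invariant means that
M(cs1) + M(mutex) + M(cs2) is constant, that is, at most one process is
in its critical section at a time:
---
program PlaceInvariantCheck;

{$IFDEF FPC}{$MODE DELPHI}{$ENDIF}
{$APPTYPE CONSOLE}

const
  NPlaces = 5;       // p1=idle1, p2=cs1, p3=mutex, p4=idle2, p5=cs2
  NTransitions = 4;  // t1=enter1, t2=exit1, t3=enter2, t4=exit2

  { Incidence matrix C[place, transition] = tokens produced - tokens consumed }
  C: array[1..NPlaces, 1..NTransitions] of Integer =
    ((-1,  1,  0,  0),   // p1: idle1
     ( 1, -1,  0,  0),   // p2: cs1
     (-1,  1, -1,  1),   // p3: mutex
     ( 0,  0, -1,  1),   // p4: idle2
     ( 0,  0,  1, -1));  // p5: cs2

  { Candidate place invariant: M(p2) + M(p3) + M(p5) is constant }
  X: array[1..NPlaces] of Integer = (0, 1, 1, 0, 1);

var
  t, p, sum: Integer;
  ok: Boolean;
begin
  ok := True;
  for t := 1 to NTransitions do
  begin
    sum := 0;
    for p := 1 to NPlaces do
      sum := sum + X[p] * C[p, t];  // one component of Transpose(X) * C
    if sum <> 0 then
      ok := False;
  end;
  if ok then
    Writeln('X is a place invariant: Transpose(X) * C = 0')
  else
    Writeln('X is not a place invariant');
end.
---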
To find the place invariants, you apply Gaussian elimination or the
Farkas algorithm to the incidence matrix, and as you will notice, those
place invariant calculations of Petri nets look like Markov chains in
mathematics, with their vector of probabilities and their transition
matrix of probabilities. Using Markov chains, you can mathematically
calculate where the vector of probabilities will "stabilize", and that
gives you very important information; you can do it by solving the
following mathematical system:
v * P = v
(where v is the unknown vector of probabilities and P is the transition
matrix of probabilities)
Solving this system of equations is very important in economics and
other fields, and you can notice that it is like calculating the
invariants, because the invariant in the system above is the vector v
of probabilities that is obtained, and this invariant, like the
invariants of the structural analysis of Petri nets, gives you very
important information about the system, such as where market shares
will stabilize, which is calculated this way in economics.
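And here is a minimal illustration of this system (my own sketch, with
made-up numbers): a small Delphi/FreePascal console program that starts
from any vector of probabilities and repeatedly multiplies it by the
transition matrix of probabilities; the vector "stabilizes" on the
solution of v * P = v, which is the invariant of the Markov chain:
---
program StationaryVector;

{$IFDEF FPC}{$MODE DELPHI}{$ENDIF}
{$APPTYPE CONSOLE}

uses
  SysUtils;

const
  { Transition matrix of probabilities: P[i, j] is the probability of
    moving from state i to state j (each row sums to 1). The numbers
    are made up; think for example of customers switching between two
    competing products each month. }
  P: array[1..2, 1..2] of Double = ((0.9, 0.1),
                                    (0.5, 0.5));
var
  v, w: array[1..2] of Double;
  i, j, step: Integer;
begin
  v[1] := 0.5;  // start from any vector of probabilities
  v[2] := 0.5;
  for step := 1 to 1000 do
  begin
    for j := 1 to 2 do
    begin
      w[j] := 0.0;
      for i := 1 to 2 do
        w[j] := w[j] + v[i] * P[i, j];  // w := v * P
    end;
    v := w;
  end;
  { Prints approximately (0.8333, 0.1667) = (5/6, 1/6): this is where
    the "market shares" stabilize for this transition matrix. }
  Writeln(Format('Stationary vector: (%.4f, %.4f)', [v[1], v[2]]));
end.
---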
About reachability analysis of a Petri net..
As you have noticed in my Petri nets tutorial example (read below),
I am analysing the liveness of the Petri net, because there is a rule
that says:
If a Petri net is live, that means that it is deadlock-free.
Reachability analysis of a Petri net with Tina gives you the necessary
information about the boundedness and liveness of the Petri net, so if
it reports that the Petri net is "live", then there is no deadlock in it.
Tina and Partial order reduction techniques..
With the advancement of computer technology, highly concurrent systems
are being developed. The verification of such systems is a challenging
task, as their state space grows exponentially with the number of
processes. Partial order reduction is an effective technique to address
this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering.
Tina uses “partial-order” reduction techniques aimed at preventing
combinatorial explosion; read more here to see it:
http://projects.laas.fr/tina/papers/qest06.pdf
About modeling and detection of race conditions and deadlocks
in parallel programming..
I have just taken a further look at the following project in Delphi
called DelphiConcurrent, by an engineer called Moualek Adlene from France:
https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas
And i have just taken a look at the following webpage of Dr Dobb's journal:
Detecting Deadlocks in C++ Using a Locks Monitor
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644
And I think that both of them are using techniques that are not as good
as analysing deadlocks with Petri nets in parallel applications. For
example, the above two methods only address locks, mutexes, or
reader-writer locks, but they do not address semaphores, event objects,
and other such synchronization objects, so they are not good. This is
why I have written a tutorial that shows my methodology of analysing
and detecting deadlocks in parallel applications with Petri nets. My
methodology is more sophisticated because it is a generalization and it
models with Petri nets a broader range of synchronization objects, and
I will soon add other synchronization objects to my tutorial. You have
to look at it, here it is:
https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets
You have to get the powerful Tina software to run the Petri net examples
inside my tutorial; here it is:
http://projects.laas.fr/tina/
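And as a minimal illustration (my own sketch, not taken from the
tutorial) of the kind of situation that you model with places and
transitions and then analyse with Tina, here is a small Delphi and
FreePascal program where two threads acquire two critical sections in
opposite order; this is the classical circular wait, and the
corresponding Petri net is not live, which is how the deadlock shows up
in the analysis:
---
program DeadlockSketch;

{$IFDEF FPC}{$MODE DELPHI}{$ENDIF}
{$APPTYPE CONSOLE}

uses
  {$IFDEF UNIX}cthreads,{$ENDIF}  // needed by FreePascal for threads on Unix
  SysUtils, Classes, SyncObjs;

type
  TWorker = class(TThread)
  private
    FFirst, FSecond: TCriticalSection;
  public
    constructor Create(AFirst, ASecond: TCriticalSection);
    procedure Execute; override;
  end;

constructor TWorker.Create(AFirst, ASecond: TCriticalSection);
begin
  FFirst := AFirst;
  FSecond := ASecond;
  inherited Create(False);  // start the thread after the fields are set
end;

procedure TWorker.Execute;
begin
  FFirst.Acquire;
  Sleep(100);        // widen the window so the circular wait shows up
  FSecond.Acquire;   // each thread now waits for the lock the other one holds
  FSecond.Release;
  FFirst.Release;
end;

var
  LockA, LockB: TCriticalSection;
  T1, T2: TWorker;
begin
  LockA := TCriticalSection.Create;
  LockB := TCriticalSection.Create;
  T1 := TWorker.Create(LockA, LockB);  // acquires A then B
  T2 := TWorker.Create(LockB, LockA);  // acquires B then A: opposite order
  T1.WaitFor;  // with the Sleep above this hangs: the program is deadlocked
  T2.WaitFor;
end.
---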
Also, to detect race conditions in parallel programming, you have to take
a look at the following new tutorial that uses the powerful Spin tool:
https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
This is how you will get much more professional at detecting deadlocks
and race conditions in parallel programming.
And about the memory safety of Delphi and FreePascal, here is what I said:
I have just read the following webpage about memory safety:
Microsoft: 70 percent of all security bugs are memory safety issues
https://www.zdnet.com/article/microsoft-70-percent-of-all-security-bugs-are-memory-safety-issues/
And it says:
"Users who often read vulnerability reports come across terms over and
over again. Terms like buffer overflow, race condition, page fault, null pointer, stack exhaustion, heap exhaustion/corruption, use after free,
or double free --all describe memory safety vulnerabilities."
So as you will notice below, the following memory safety problems
have been solved in Delphi:
And I have just read the following webpage about "Fearless Security:
Memory safety":
https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/
Here are the memory safety problems:
1- Misusing Free (use-after-free, double free)
I have solved this in Delphi and FreePascal by inventing a "Scalable"
reference counting with efficient support for weak references. Read
below about it, and see the small interface-based sketch after this
list for the general principle of reference counting.
2- Uninitialized variables
This can be detected by the compilers of Delphi and FreePascal, which
emit warnings and hints for it.
3- Dereferencing Null pointers
I have solved this in Delphi and FreePascal by inventing a "Scalable"
reference counting with efficient support for weak references. Read
below about it.
4- Buffer overflow and underflow
This has been solved in Delphi by using madExcept, read here about it:
http://help.madshi.net/DebugMm.htm
You can buy it from here:
http://www.madshi.net/
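And here is a minimal sketch of the general principle of reference
counting only, using the standard interface reference counting of
Delphi and FreePascal (TInterfacedObject); it is NOT my scalable
reference counting algorithm, it just shows why reference counting
removes the manual Free calls that cause use-after-free and double free:
---
program RefCountSketch;

{$IFDEF FPC}{$MODE DELPHI}{$ENDIF}
{$APPTYPE CONSOLE}

type
  IGreeter = interface
    ['{B2F9D6E4-0D9C-4C1B-9B8E-1A2C3D4E5F60}']
    procedure SayHello;
  end;

  { TInterfacedObject gives automatic reference counting: the object is
    freed when the last interface reference to it disappears, so there
    is no manual Free call that could be misused. }
  TGreeter = class(TInterfacedObject, IGreeter)
    procedure SayHello;
    destructor Destroy; override;
  end;

procedure TGreeter.SayHello;
begin
  Writeln('Hello from a reference-counted object');
end;

destructor TGreeter.Destroy;
begin
  Writeln('Destroyed automatically when the last reference is released');
  inherited;
end;

procedure UseGreeter;
var
  G: IGreeter;
begin
  G := TGreeter.Create;  // reference count becomes 1
  G.SayHello;
end;                     // G goes out of scope: count drops to 0, object is freed

begin
  UseGreeter;
end.
---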
There also remains the stack exhaustion memory safety problem,
and here is how to detect it in Delphi:
Call the function "DoStackOverflow" below once from your code and you
will get the EStackOverflow exception raised by Delphi with the message
"stack overflow", and you can print the source code line where
EStackOverflow is raised with JCLDebug and such:
---
function DoStackOverflow: Integer;
begin
  // unbounded recursion: each call consumes stack space until it is exhausted
  Result := 1 + DoStackOverflow;
end;
---
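And here is a minimal usage sketch (my own illustration, assuming a
Delphi console program with SysUtils): it calls DoStackOverflow inside a
try/except block and traps the EStackOverflow exception; at that point a
real program would log the source line with JCLDebug and then terminate,
since the stack may not be in a good state after an overflow:
---
program StackOverflowDemo;

{$APPTYPE CONSOLE}

uses
  SysUtils;

function DoStackOverflow: Integer;
begin
  Result := 1 + DoStackOverflow;  // unbounded recursion exhausts the stack
end;

begin
  try
    DoStackOverflow;
  except
    on E: EStackOverflow do
      Writeln('Trapped: ', E.Message);
  end;
end.
---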
About my scalable algorithms inventions..
I am a white Arab, and I am a gentleman type of person,
and I think that you know me too by my poetry that I wrote
in front of you and that I posted here, but I am
also a more serious computer developer, and I am also
an inventor who has invented many scalable algorithms; read about
them in my writing below:
Here is my latest scalable algorithm invention; read
what I have just responded in comp.programming.threads:
About my LRU scalable algorithm..
On 10/16/2019 7:48 AM, Bonita Montero on comp.programming.threads wrote:
Amine, a quest for you:
Database-servers and operating-system-kernels mostly use LRU as
the scheme to evict old buffers from their cache. One issue with
LRU is, that an LRU-structure can't be updated by multiple threads simultaneously. So you have to have a global lock.
I managed to write a LRU-caching-class that can update the links
in the LRU-list to put the most recent fetched block to the h