Yesterday I wrote a catch/3 for the Dogelog runtime. Which
is quite funny, since the Dogelog runtime doesn't have a call
stack: there is no way to identify a stack trace, and
subsequently no way to identify the parent catch/3
handler. But I could nevertheless do it, by recursively invoking
the interpreter as in Jekejeke Prolog, which unfortunately
uses the native stack. Maybe I will find a better solution later.
I am already happy with this solution, since it should allow
garbage collection across catch/3. That was a little tricky,
but hopefully some further testing will show that it works.
I also found a little discrepancy among Prolog systems.
A famous use case of catch/throw is constraint optimization:
you can use catch/throw to return an intermediate value and
reset the constraint model. Such use cases of catch/throw are
for example found in the SWI-Prolog libraries for CLP(FD) and
CLP(B) by Markus Triska. But the backtrace problem does not
arise there, since you anyway do not throw a ball of the form
error(E,_). If you do not throw a ball of this form, there is no
hole _ to fill with the backtrace. You can check those
CLP(FD) and CLP(B) libraries to see what ball is thrown.
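The intermediate-value pattern described above can be sketched in Python, with an exception playing the role of the thrown ball (all names here are hypothetical illustrations, not Dogelog or SWI-Prolog APIs):

```python
class Ball(Exception):
    """Plays the role of the thrown Prolog ball carrying a bound."""
    def __init__(self, value):
        self.value = value

def search(candidates, bound):
    # Enumerate solutions better than the current bound; "throw"
    # the first improvement instead of returning it, so the caller
    # can reset the model and restart, as catch/3 would.
    for c in candidates:
        if c < bound:
            raise Ball(c)
    return None  # no improvement: the bound is optimal

def optimize(candidates):
    bound = float('inf')
    while True:
        try:
            search(candidates, bound)
            return bound       # search could not improve: optimum found
        except Ball as b:
            bound = b.value    # catch/3 handler: record bound, re-run

# optimize([7, 3, 5, 2]) finds the minimum, 2
```

Since the ball carries only the bound and not an error(E,_) term, no backtrace hole is involved.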
Mostowski Collapse wrote on Sunday, 25 July 2021 at 11:41:22 UTC+2:
The discrepancy among Prolog systems can be tested with:
test(X) :- catch((X=1;X=2), _, true).
test2(X) :- catch((X=1;X=2,throw(3)), X, true).

test3 :- test(X), test(Y), write(X-Y), write(' '), fail; nl.
test3 :- test2(X), test(Y), write(X-Y), write(' '), fail; nl.
test3 :- test(X), test2(Y), write(X-Y), write(' '), fail; nl.
test3 :- test2(X), test2(Y), write(X-Y), write(' '), fail; nl.
As it stands, Dogelog, Jekejeke, GNU-Prolog, TauProlog,
YAP and XSB deliver this result:
?- test3, fail; true.
1-1 1-2 2-1 2-2
1-1 1-2 3-1 3-2
1-1 1-3 2-1 2-3
1-1 1-3 3-1 3-3
Whereas SWI-Prolog 8.3.26 and ECLiPSe 7.0 deliver this result:
?- test3, fail; true.
1-1 1-2 2-1 2-2
1-1 1-2
ERROR: Unhandled exception: 3
I guess it has to do with the timing of when the throw ball is unified
with the second catch/3 argument: after bindings are undone, or before.
If the binding is still `X=2`, then a throw ball `3` doesn't unify.

Mostowski Collapse wrote on Sunday, 25 July 2021 at 11:38:19 UTC+2:
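The before/after-undo distinction can be sketched with a toy trail mechanism (hypothetical Python, not Dogelog internals): the catcher X may still be bound to 2 when throw(3) fires.

```python
bindings = {}   # variable name -> value
trail = []      # bindings to undo on backtracking

def bind(var, val):
    bindings[var] = val
    trail.append(var)

def undo(mark):
    # undo all bindings made since the trail mark
    while len(trail) > mark:
        del bindings[trail.pop()]

def unify_catcher(var, ball):
    if var in bindings:               # catcher already bound ...
        return bindings[var] == ball  # ... must equal the ball
    bind(var, ball)
    return True

# Matching the ball BEFORE undoing: X is still bound to 2, so 3 won't match
mark = len(trail)
bind('X', 2)
before = unify_catcher('X', 3)        # False: ball is rethrown

# Matching the ball AFTER undoing: X is unbound again, catch succeeds
undo(mark)
after = unify_catcher('X', 3)         # True: handler runs with X = 3
```

Which of the two behaviours a system shows would explain the "Unhandled exception: 3" seen above.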
But what is more interesting are my own systems. First, only 7% overhead:
/* Mac, Jekejeke Prolog 1.5.1 */
?- time((between(1,1000,_), test1, fail; true)).
% Up 538 ms, GC 1 ms, Threads 535 ms (Current 07/27/21 00:59:27)
Yes
?- time((between(1,1000,_), test2, fail; true)).
% Up 574 ms, GC 2 ms, Threads 571 ms (Current 07/27/21 00:59:32)
Yes
And in the brand new Dogelog runtime? What is the overhead there?
/* Mac, Chrome, Dogelog 0.9.2 */
:- time((between(1, 10, _), test1, fail; true)).
:- time((between(1, 10, _), test2, fail; true)).
% Wall 1528 ms, trim 0 ms
% Wall 1530 ms, trim 0 ms
Practically zero!!! LoL
Mostowski Collapse wrote on Tuesday, 27 July 2021 at 00:58:15 UTC+2:
Today I came up with the slogan:
***********************************************
The homoiconicity in Prolog comes
with the cost of call/1. It's not free!
***********************************************
Let's see what we've got:
test1 :- between(1,100,_), between(1,100,_).
test2 :- X = (between(1,100,_), between(1,100,_)), call(X).
Just now I get around 24% overhead for the above example:
/* Mac, SWI-Prolog 8.3.26 */
?- time((between(1,1000,_), test1, fail; true)).
% 10,102,002 inferences, 0.497 CPU in 0.497 seconds (100% CPU, 20327800 Lips)
true.
?- time((between(1,1000,_), test2, fail; true)).
% 10,102,000 inferences, 0.616 CPU in 0.616 seconds (100% CPU, 16403372 Lips)
true.
In this system I find no difference:
/* Mac, GNU-Prolog 1.4.5 */
?- between(1,1000,_), test1, fail; true.
(310 ms) yes
?- between(1,1000,_), test2, fail; true.
(316 ms) yes
Why is that?
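The cost of call/1 can be imitated in Python (a sketch, not Dogelog code): test1 runs the loops directly, while test2 hands a goal term to a tiny interpreter that must first inspect the term at run time, as call/1 must.

```python
def test1():
    n = 0
    for _ in range(100):
        for _ in range(100):
            n += 1
    return n

def solve(goal):
    # a minimal call/1: decompose the goal term at run time
    op, *args = goal
    if op == 'between':
        lo, hi = args
        return range(lo, hi + 1)
    raise ValueError(op)

def test2():
    n = 0
    for _ in solve(('between', 1, 100)):
        for _ in solve(('between', 1, 100)):
            n += 1
    return n

# Both compute 10000; test2 pays for term inspection on every call.
```

A system that compiles the meta-call away, as GNU-Prolog apparently does here, makes the two paths identical.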
But otherwise Dogelog is still a little slow. I had
to define between/3 as follows:
between(L, H, X) :- L =< H, X = L.
between(L, H, X) :- L < H, Y is L+1, between(Y, H, X).
On the other hand Jekejeke Prolog and SWI-Prolog
have natively implemented between/3.
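The recursive between/3 above corresponds to this Python generator (a sketch; the actual Dogelog Python port may differ):

```python
def between(lo, hi):
    # between(L, H, X) :- L =< H, X = L.
    # between(L, H, X) :- L < H, Y is L+1, between(Y, H, X).
    while lo <= hi:
        yield lo      # first clause: X = L
        lo += 1       # second clause: retry with L+1

# list(between(1, 5)) -> [1, 2, 3, 4, 5]
```

A native implementation avoids one clause indexing step and one arithmetic comparison per solution, which is where the interpreted definition loses time.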
The arithmetic of Dogelog is still slow, so possibly this
masks the overhead of call/1. The rest of the code is so
slow that the overhead of call/1 becomes a small fraction.
Maybe in future releases, if we manage faster arithmetic
or a built-in between/3, the situation will look different.
The Albufeira instructions might also change in the future,
and other corners as well. But currently Jekejeke Prolog is
more than 300x faster than the Dogelog runtime. Dogelog
is nevertheless enjoyable. You can try it here:
https://www.dogelog.ch/
Mostowski Collapse wrote on Tuesday, 27 July 2021 at 01:02:14 UTC+2:
Here is what the ghost buster API can do, helping to globalize an array:
?- X = a(42,69), potion(X,Y), assertz(my_data(Y)).
X = a(42, 69), Y = [object Object]
?- my_data(Y), bust(Y,Z).
Y = [object Object], Z = a(42, 69)
To work with the array, arg/3 and functor/3 from the ISO core standard
are quite sufficient for read access, as this further example shows:
sum_array(A, S) :-
   sum_array2(1, 0, A, S).

sum_array2(I, S, A, T) :-
   functor(A, _, N),
   I =< N, !,
   arg(I, A, X),
   H is S+X,
   J is I+1,
   sum_array2(J, H, A, T).
sum_array2(_, S, _, S).
?- my_data(Y), bust(Y,Z), sum_array(Z, S).
Y = [object Object], Z = a(42, 69), S = 111
For write access one might consider change_arg/3 aka nb_linkarg/3.
Mostowski Collapse wrote on Tuesday, 3 August 2021 at 22:57:13 UTC+2:
If the Prolog system allows for a reference data type,
there is an easy way to globalize arrays. This is more
efficient than GNU Prolog's g_array. Compound terms
play the role of arrays, and the globalization happens
simply through wrapping into a reference data type.
Here is an example of how this globalization can be done
in the Dogelog runtime. It helps here that the Dogelog runtime
has anyway all terms on the heap; otherwise it becomes
more difficult to realize:
function Ghost(value) {
    this.value = value;
}

function potion(term) {
    return new Ghost(term);
}

function bust(term) {
    return term.value;
}
register("potion", 2, potion, FFI_FUNC);
register("bust", 2, bust, FFI_FUNC);
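For the Python target, the same ghost-buster pair might look like this; register() and FFI_FUNC mirror the JavaScript names above and are only an assumed shape of the Python port's FFI, not confirmed API:

```python
class Ghost:
    """Opaque wrapper: hides a term from the Prolog machine,
    so the wrapped compound survives as a global reference."""
    def __init__(self, value):
        self.value = value

def potion(term):
    # wrap a term into a reference (globalize it)
    return Ghost(term)

def bust(term):
    # unwrap the reference again
    return term.value

# Hypothetical registration, by analogy with the JavaScript version:
# register("potion", 2, potion, FFI_FUNC)
# register("bust", 2, bust, FFI_FUNC)
```

The round trip bust(potion(T)) hands back the very same term object, which is what makes write access via the reference possible.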
What would be cute is if we could have two operators for
potion/2 and bust/2. For example these operators,
adopted from the C programming language:
- &/1: Address-of operator
X = &Y would be the same as potion(Y, X).
- */1: Indirection operator
X = *Y would be the same as bust(Y, X).
But now I am getting carried away, and start introducing
term rewriting like the functions-on-dicts dot operator
in SWI-Prolog.
Better stop it now!
Feel free to try the Dogelog runtime live with the
array globalization example here:
Dogelog Array
https://www.dogelog.ch/array.html
Mostowski Collapse wrote on Tuesday, 3 August 2021 at 22:58:30 UTC+2:
Because of some revival of the SWI-Prolog Discourse thread "Errors
considered harmful", I spent some time thinking about whether Dogelog
should do its arithmetic differently. Like provide some new innovation.
But I don't see the need. I made a little experiment. This works
on my side, the O'Keefe example. I don't get any error in the
top-level when doing:
?- [user].
plus(X, _, _) :- \+ number(X), !, fail.
plus(_, Y, _) :- \+ number(Y), !, fail.
plus(X, Y, Z) :- Z is X+Y.
I then get what the SWI-Prolog discussant wants:
?- plus(a, 3, X).
false
?- plus(2, 3, X).
X = 5
I don't know whether the above behaviour is a new bug or a
feature. But somehow SWI-Prolog 8.3.26 lets me override
the built-in plus/3 silently. If I start SWI-Prolog 8.3.26 freshly
and do not add the 3 clauses, I get:
?- plus(a, 3, X).
ERROR: Type error: `integer' expected, found `a' (an atom)
I didn't try yet whether it's possible to put the O'Keefe plus/3 into a
module and import it.
If SWI-Prolog wanted to be compatible with O'Keefe's 1984
proposal, it would need to provide more than only succ/2 and
plus/3. The proposal has the following list of predicates,
which are all supposed to be able to fail on non-number arguments:
lt/2: Comparison, Less Than
le/2: Comparison, Less Than or Equal
gt/2: Comparison, Greater Than
ge/2: Comparison, Greater Than or Equal
times/3: Multiplication
divide/4: Division with Remainder
http://eclipseclp.org/reports/okeefe84.html
But SWI-Prolog implements none of these. And the ISO core standard
also does not determine some of the above predicates, so they
are free for private use:
?- divide(100,30,X,Y).
ERROR: Unknown procedure: divide/4 (DWIM could not correct goal)
?- lt(X,30).
ERROR: Unknown procedure: lt/2 (DWIM could not correct goal)
?- times(10,30,X).
ERROR: Unknown procedure: times/3 (DWIM could not correct goal)
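The O'Keefe failing style can be transliterated to Python, with None playing the role of failure (illustration only; none of this is SWI-Prolog API):

```python
def _num(x):
    return isinstance(x, (int, float))

def plus(x, y):
    if not _num(x) or not _num(y):
        return None          # fail instead of raising a type error
    return x + y

def times(x, y):
    if not _num(x) or not _num(y):
        return None
    return x * y

def divide(x, y):
    # divide/4: division with remainder
    if not isinstance(x, int) or not isinstance(y, int) or y == 0:
        return None
    return x // y, x % y

# plus('a', 3) -> None (fails), plus(2, 3) -> 5, divide(100, 30) -> (3, 10)
```

The point of the proposal is exactly this: non-number arguments make the predicate fail quietly rather than signal a type error.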
Mostowski Collapse wrote on Sunday, 8 August 2021 at 17:28:17 UTC+2:
There are rumors that Forth was prototyped in Prolog,
similar to how Erlang was born. In some astronomical observatory
in Argentina they found this fragment:

forth([Op|Rest]) --> word(Op), forth(Rest).
forth([]) --> [].

word(+), [Result] --> [Number1, Number2], {Result is Number1+Number2}.
word(*), [Result] --> [Number1, Number2], {Result is Number1*Number2}.
word(Number), [Number] --> {number(Number)}.
?- forth([5,3,+,7,2,+,*],[],X).
X = [72] .
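The DCG above threads the operand stack through the difference list; rewritten as a plain Python stack machine (a sketch, using strings for the operator atoms), the same program evaluates to the same result:

```python
def forth(words):
    stack = []
    for w in words:
        if w == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)   # word(+): pop two, push the sum
        elif w == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)   # word(*): pop two, push the product
        else:
            stack.append(w)       # word(Number): push a number
    return stack

# forth([5, 3, '+', 7, 2, '+', '*']) -> [72]
```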
The Dogelog runtime progressed over the last weeks. The milestones
were ensure_loaded/1 for ISO core standard conforming Prolog texts,
an unattended query answerer and recently first argument indexing.
Despite all this progress, we have the feeling that the real potential
of the Dogelog runtime has not yet been demonstrated: namely that
the cross compiler opens the door to many new platforms, and not
only the JavaScript platform. So here are some first beginnings:
Python Version of Dogelog Runtime machine https://twitter.com/dogelogch/status/1426528779707502592
Python Version of Dogelog Runtime machine https://www.facebook.com/groups/dogelog
Yesterday we went into a little programming binge, even though there
was a free parade in Zurich. We could already implement a transpiler
that targets Python. We simply took the transpiler main.p that targets
JavaScript and turned it into a new transpiler mainpy.p that targets
Python. The code is already on GitHub, and we present it here
as the Python code mainpy.p. We were also busy
on machine.py and special.py. The progress is now:
+------------+ cross +-------------+
| loader.p | compile | loader.py | 100%
| compiler.p | -----------> | compiler.py | 100%
+------------+ +-------------+
| machine.py | 66%
| special.py | 33%
+-------------+
See also:
Python Version of Dogelog Runtime special https://twitter.com/dogelogch/status/1426884473988292617
Python Version of Dogelog Runtime special https://www.facebook.com/groups/dogelog
Mostowski Collapse wrote on Saturday, 14 August 2021 at 15:03:24 UTC+2:
$ python.exe toplevel.py
Dogelog Runtime, Prolog to the Moon, 0.9.3
We finally sat down and implemented eval_term(). The new
eval_term() routine makes our Prolog based implementation of is/2
obsolete. The old realization used (=..)/2 in two places
and call/1. The new implementation bypasses Prolog and
directly uses host language means. We tested the new version
of the Dogelog runtime against the current Scryer Prolog version:
Preview: Dogelog Runtime beats Scryer Prolog by 30%. (Jekejeke) https://twitter.com/dogelogch/status/1434318760064790530
Preview: Dogelog Runtime beats Scryer Prolog by 30%. (Jekejeke) https://www.facebook.com/groups/dogelog
Although this is an encouraging result, it also has its drawbacks.
One might wish that a (=..)/2 and call/1 based implementation were
fast enough. Also, a Prolog based implementation has the
advantage that it works with user defined predicates as evaluable
functions as well. We had a bridge in Jekejeke Prolog to allow that.
A next step is to bring this bridge to the Dogelog runtime too, so that
user defined evaluable functions become possible as well.
Or invent something new, to have the cake and eat it too.
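An eval_term() in the spirit described above might look like this (a sketch, not the actual Dogelog routine): compounds are tuples (functor, args...), built-in evaluable functions live in a table, and unknown functors are where a bridge to user-defined evaluable functions could hook in.

```python
import operator

BUILTINS = {('+', 2): operator.add,
            ('-', 2): operator.sub,
            ('*', 2): operator.mul}

def eval_term(term):
    # numbers evaluate to themselves
    if isinstance(term, (int, float)):
        return term
    functor, *args = term
    fun = BUILTINS.get((functor, len(args)))
    if fun is None:
        # here the bridge to user-defined evaluable functions would go
        raise ValueError('unknown evaluable: %s/%d' % (functor, len(args)))
    # evaluate arguments recursively, then apply, bypassing (=..)/2 and call/1
    return fun(*(eval_term(a) for a in args))

# eval_term(('+', 1, ('*', 2, 3))) -> 7
```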
The Standard Python version of the Dogelog runtime
is annoyingly slow. So we gave it a try with
another Python, and it was 6x faster.
We could test GraalVM. We worked around the missing
match statement in Python 3.8 by replacing it with if-then-else.
Performance is a little better, we find:
/* Standard Python Version, Warm Run */
?- time(fibo(23,X)).
% Wall 3865 ms, gc 94 ms, 71991 lips
X = 46368.
/* GraalVM Python Version, Warm Warm Run */
?- time(fibo(23,X)).
% Wall 695 ms, gc 14 ms, 400356 lips
X = 46368.
See also:
JDK 1.8 GraalVM Python is 6x faster than Standard Python https://twitter.com/dogelogch/status/1437395917167112193
JDK 1.8 GraalVM Python is 6x faster than Standard Python https://www.facebook.com/groups/dogelog
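The fibo/2 predicate itself is not shown in the posts; a naive definition consistent with the reported X = 46368 for fibo(23, X) would be, transliterated to Python (an assumption, with both base cases returning 1):

```python
def fibo(n):
    # naive doubly recursive Fibonacci benchmark, base cases 1
    if n < 2:
        return 1
    return fibo(n - 1) + fibo(n - 2)

# fibo(23) -> 46368
```

The doubly recursive calls are exactly what makes this a good lips (logical inferences per second) benchmark.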
Mostowski Collapse wrote:
Dear All,
Needs a decent browser, JavaScript >2015
http://www.dogelog.ch/
Currently it swallows errors silently. Everything is written
in Prolog itself (read/1, consult/1, etc.) and then cross
compiled into JavaScript. Not sure whether it can already
compile itself. But it has a text field: you can add the
clauses in the text field and execute the directives in the
text field, and it has write/1 and nl/0 into the HTML document.
More care for the good boy upcoming.
Have Fun!
Jan Burse, 24.05.2021 #StaySafe
http://www.jekejeke.ch/
Also, I got an idea for the shortest Albufeira instruction
code. We don't need separate op-codes for functor,
atomic and variable.
- Short form functor(A,F):
Instead of an op-code for functor/2, just put
the arity into the instruction stream.
- Short form atomic(F):
Instead of an op-code for atomic/1, we put
the arity 0; this makes it different from functor.
- Short form var(N):
Instead of an op-code for var/1, we put the
arity -1; this makes it different from functor and
atomic.
We could also use -2 and -3 for first_var/1 and
singleton/0. The problem is that we have the further
instructions arity/1 and zero/0. But we might set F=undefined
for these instructions and blow them up a little.
This would be a practically op-code-less encoding
of the Albufeira instruction code.
It breaks down if we encode more instructions.
But so far we have tried to encode an argument,
here the arity, in the op-code itself.
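The op-code-less decoding described above can be sketched as follows (hypothetical Python, not actual Dogelog Albufeira code): the first cell of each instruction is the arity, and its value range selects the instruction kind.

```python
def decode(stream):
    out, i = [], 0
    while i < len(stream):
        n = stream[i]
        if n >= 1:                       # functor(A, F): arity >= 1, then name
            out.append(('functor', stream[i + 1], n)); i += 2
        elif n == 0:                     # atomic(F): arity 0, then name
            out.append(('atomic', stream[i + 1])); i += 2
        elif n == -1:                    # var(N): arity -1, then index
            out.append(('var', stream[i + 1])); i += 2
        else:
            raise ValueError(n)          # -2/-3 etc. left out of this sketch
    return out

# decode([2, 'point', 0, 'a', -1, 0]) ->
#   [('functor', 'point', 2), ('atomic', 'a'), ('var', 0)]
```

The arity cell thus doubles as the op-code, which is the whole trick.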
Mostowski Collapse wrote on Monday, 13 September 2021 at 14:45:18 UTC+2:
Oops, "speed never hurt anybody". Don't be
evil, I am talking about unarmed drones.
See also:
Drone Programming With Python Course https://www.youtube.com/watch?v=LmEcyQnfpDA
Mostowski Collapse wrote:
I am not testing this use-case. But a related
use-case might highlight why speed never
hurt anybody.
Let's say you program a flying drone with Python,
and the measurement comes from the drone's sensor
and communication systems.
Let's say you are using the idle time between
measurements for some complex planning. It
is then not true that you are anyway just waiting
for the measurement.
Hope this helps!
BTW: If somebody knows another Python implementation,
I am happy to test this implementation as well.
I am assuming that the standard Python python.exe
I tested amounts to CPython? Not sure. And the
GraalVM one is practically the same as Jython? Not
sure either.
Opinion: Anyone who is counting on Python for truly fast compute
speed is probably using Python for the wrong purpose. Here, we use
Python to control Test Equipment, to set up the equipment and ask
for a measurement, get it, and proceed to the next measurement; and
at the end produce a nice formatted report. If we wrote the test
script in C or Rust or whatever, it could not run substantially
faster, because it is communicating with the test equipment, setting
it up and waiting for responses, and that is where the vast majority
of the time goes. Especially if the measurement result requires
averaging, it can take a while. In my opinion this is an ideal use
for Python, not just because the speed of Python is not important,
but also because we can easily find people who know Python, who like
coding in Python, and will join the company to program in Python
... and stay with us.
--- Joseph S.
A friend just sent me a Web Sudoku made with Dogelog Runtime https://gist.github.com/jburse/c85297e97091caf22d306dd8c8be12fe#gistcomment-3895696
LoL
Mostowski Collapse wrote on Wednesday, 15 September 2021 at 16:08:54 UTC+2:
The new release 0.9.6 is quite speedy:
"Maailman vaikein" 850002400720000009004000000000107002305000900040000000000080070017000000000036040
time(solve(Puzzle))
% Wall 41354 ms, gc 520 ms, 3143029 lips
in Browser
See also:
Preview: New para/1 instruction for Dogelog runtime. (Jekejeke) https://twitter.com/dogelogch/status/1438586282502983682
Preview: New para/1 instruction for Dogelog runtime. (Jekejeke) https://www.facebook.com/groups/dogelog
Mostowski Collapse schrieb am Donnerstag, 16. September 2021 um 22:27:02 UTC+2:
A friend just sent me a Web Sudoku made with Dogelog Runtime https://gist.github.com/jburse/c85297e97091caf22d306dd8c8be12fe#gistcomment-3895696
LoL
Mostowski Collapse schrieb am Mittwoch, 15. September 2021 um 16:08:54 UTC+2:
Oops "speed did never hurt anybody". Don't be
evil, I am talking about unarmed drones.
See also:
Drone Programming With Python Course https://www.youtube.com/watch?v=LmEcyQnfpDA
Mostowski Collapse schrieb:
I am not testing this use-case. But a related
use-case might highlight why speed did never
hurt anybody.
Lets say you program a flying drone with Python,
and the measurement is from the drone sensor
and communication systems.
Lets say you are using the idle time between
measurements for some complex planning. It
is then not true that you have anyway
to wait for the measurement.
Hope this helps!
BTW: If somebody knows another Python implementation
I am happy to test this implementation as well.
I am assuming that the standard Python python.exe
I tested amounts to CPython? Not sure. And the
GraalVM is practically the same as JPython? Not
sure either.
Opinion: Anyone who is counting on Python for truly fast computespeed is probably using Python for the wrong purpose. Here, we use Python to control Test Equipment, to set up the equipment and ask for a measurement, get it, and proceed to the next measurement; and at the end
produce a nice formatted report. If we wrote the test script in C or Rust or whatever it could not run substantially faster because it is communicating with the test equipment, setting it up and waiting for responses, and that is where the vast majority of the time goes. Especially if the measurement result requires averaging it can take a while. In my opinion this is an ideal use for Python, not just because the speed of Python is not important, but also because we can easily find people who know Python, who like coding in Python, and will join the company to program in Python ... and stay with us.
--- Joseph S.
Mostowski Collapse schrieb:
Also got an idea for the shortest Albuferia instruction
code. We don't need separate op-codes for functor
, atomic and variable.
- Short form functor(A,F):
Instead of an op-code for functor/2, just put
the arity into the instruction stream.
- Short form atomic(F):
Instead of an op-code for atomic/1, we put
the arity 0, this makes it different from functor.
- Short form var(N):
Instead of an op-code for var/1, we put the
arity -1, this makes it different from functor and
atomic.
We could also use -2 and -3 for first_var/1 and
singleton/0. Problem is we have further instructions
arity/1 and zero/0. But we might set F=undefined
for this instructions and blow them up a little.
This would be practially an op-code less encoding
of Albuferia instruction code.
It breaks down if we encode more instructions.
But so far we haven tried to encode an argument,
here the arity, in an op-code itself.
Mostowski Collapse schrieb am Montag, 13. September 2021 um 14:45:18 >> UTC+2:
The Standard Python version of Dogelog runtime
is annoyingly slow. So we gave it a try with
andother Python, and it was 6x times faster.
We could test GraalVM. We worked around the missing
match in Python 3.8 by replacing it with if-then-else.
Performance is a little better, we find:
/* Standard Python Version, Warm Run */
?- time(fibo(23,X)).
% Wall 3865 ms, gc 94 ms, 71991 lips
X = 46368.
/* GraalVM Python Version, Warm Warm Run */
?- time(fibo(23,X)).
% Wall 695 ms, gc 14 ms, 400356 lips
X = 46368.
See also:
JDK 1.8 GraalVM Python is 6x faster than Standard Python
https://twitter.com/dogelogch/status/1437395917167112193
JDK 1.8 GraalVM Python is 6x faster than Standard Python
https://www.facebook.com/groups/dogelog
Mostowski Collapse schrieb:
Dear All,
Needs a decent browser, JavaScript >2015
http://www.dogelog.ch/
Currently swallows errors silently. Everything written
in Prolog itself, read/1, consult/1, etc.. and then cross
compiled into JavaScript. Not sure whether it can already
compile itself. But it has a text field and can add the
clauses in the text field and execute the directives in the
text field, and it has write/1 and nl/0 into the HTML document.
More care for the good boy upcoming.
Have Fun!
Jan Burse, 24.05.2021 #StaySafe
http://www.jekejeke.ch/
A nice project could now be to redo this one,
see how the prover would perform:
FLiP, a Logical Framework in Python http://staff.washington.edu/jon/flip/www/index.html
The prover is listed here:
Proof checkers - Joseph Vidal-Rosset https://www.vidal-rosset.net/proof_checkers.html
Ha Ha, the prover was just right in front of my nose
all the time. But before venturing into such a quest,
need to add occurs check to the Dogelog runtime.
I had some very good dynamic optimization for
occurs check in Jekejeke Prolog, but for Dogelog
runtime everything is new, need to figure out what
optimizations could be applied there. There is a ticket
for occurs check already on GitHub:
Bring occurs check to Dogelog runtime #143 https://github.com/jburse/dogelog-moon/issues/143
That I get censored on Python pipermail, is possibly an
out burst of taking Python too literal, like here:
Monty Python - She's a witch!
https://www.youtube.com/watch?v=zrzMhU_4m-g
But we can turn this into a FLiP, a Logical Framework in Python exercise:
"There are ways of telling whether she's a witch."
"What do you do with witches?" "Burn them!"
Ax.(Witch(x) -> Burn(x)) (1) Given
"Why do witches burn?" "'Cause they're made of wood!"
Ax.(Wood(x) -> Witch(x)) (2) Given
"How do we tell if she's made of wood?" "Does wood sink in water?" "It floats!"
Ax.(Floats(x) -> Wood(x)) (3) Given
"What also floats in water?" "A duck!"
Floats(duck) (4) Given
"Exactly! So, logically ..."
"If she weights the same as a duck, she's made of wood!"
Ax.Ay.((Floats(x) & (weight(x) = weight(y))) -> Floats(y)) (5) Given
"We shall use my largest scales. ... Remove the supports!"
weight(duck) = weight(girl) (6) Given
Ay.((Floats(duck) & (weight(duck) = weight(y))) -> Floats(y)) (7) A-Elimination (5)
(Floats(duck) & (weight(duck) = weight(girl))) -> Floats(girl) (8) A-Elimination (7)
Floats(duck) & (weight(duck) = weight(girl)) (9) And-Introduction (4) (6) Floats(girl) (10) Implication-Elimination (Modus Ponens) (8) (9) Floats(girl) -> Wood(girl) (11) A-Elimination (3)
Wood(girl) (12) Implication-Elimination (Modus Ponens) (11) (10)
"A witch! A witch!"
Wood(girl) -> Witch(girl) (13) A-Elimination (2)
Witch(girl) (14) Implication-Elimination (Modus Ponens) (13) (12)
"Burn her! Burn!"
Witch(girl) -> Burn(girl) (15) A-Elimination (1)
Burn(girl) (16) Implication-Elimination (Modus Ponens) (15) (14) http://staff.washington.edu/jon/flip/www/witch.html
Mostowski Collapse schrieb am Donnerstag, 23. September 2021 um 18:19:09 UTC+2:
A nice project could now be to redo this one,
see how the prover would perform:
FLiP, a Logical Framework in Python http://staff.washington.edu/jon/flip/www/index.html
The prover is listed here:
Proof checkers - Joseph Vidal-Rosset https://www.vidal-rosset.net/proof_checkers.html
Ha Ha, the prover was just right in front of my nose
all the time. But before venturing into such a quest,
need to add occurs check to the Dogelog runtime.
I had some very good dynamic optimization for
occurs check in Jekejeke Prolog, but for Dogelog
runtime everything is new, need to figure out what
optimizations could be applied there. There is a ticket
for occurs check already on GitHub:
Bring occurs check to Dogelog runtime #143 https://github.com/jburse/dogelog-moon/issues/143
Sysop: | Keyop |
---|---|
Location: | Huddersfield, West Yorkshire, UK |
Users: | 294 |
Nodes: | 16 (2 / 14) |
Uptime: | 245:00:06 |
Calls: | 6,626 |
Calls today: | 2 |
Files: | 12,175 |
Messages: | 5,320,403 |