I finally understood what closures mean.
In Forth parlance it is a search order that is kept with a
Forth word. Each time the Forth word is invoked, that search
order is obeyed on top of the parameters that are passed.
On 2023-08-14, albert@cherry.(none) (albert) <albert@cherry> wrote:
> I finally understood what closures mean.
> In Forth parlance it is a search order that is kept with a
> Forth word.
It seems as if you emigrated to Forth Island decades ago and now you
need everything translated into Forthese.
On 15/08/2023 1:27 am, Kaz Kylheku wrote:
> On 2023-08-14, albert@cherry.(none) (albert) <albert@cherry> wrote:
>> I finally understood what closures mean.
>> In Forth parlance it is a search order that is kept with a
>> Forth word.
> It seems as if you emigrated to Forth Island decades ago and now you
> need everything translated into Forthese.
Some concepts that abound are baffling and need translating.
OTOH Forth aficionados come up with novelties too :)
albert@cherry.(none) (albert) writes:
> I finally understood what closures mean.
> In Forth parlance it is a search order that is kept with a
> Forth word.
That view leads down to boy compilers (in Knuth's man-or-boy test),
and at best (if you save and restore the variables on function entry
and exit) dynamically-scoped Lisp.
> An environment is a wordlist,
A wordlist has only one instance of each word in it. An environment
in a statically-scoped language is a set of local frames, where each
local frame is created dynamically when the function to which the
frame belongs is called. So if a function has two instances at the
same time (e.g., in recursion), a wordlist is insufficient.
You can use wordlists to store the offsets of variables within the
frames, but you have to manage the frames separately.
- anton

I'm an amateur. I implement MAL. I don't know whether that is a
dynamically-scoped Lisp. Actually my goal is to prove that

((fn* [q] (quasiquote ((unquote q) (quote (unquote q))))) (quote (fn* [q] (quasiquote ((unquote q) (quote (unquote q)))))))

Okay, so we have multiple instances of a wordlist. That would mean
that

(def! fib (fn* (N) (if (= N 0) 1
                   (if (= N 1) 1 (+ (fib (- N 1))
                                    (fib (- N 2)))))))

would fail, but it doesn't:

(fib 10)

I think I do with the proposed mechanism.
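The environment model described above, a chain of dynamically created frames searched innermost-out, rather than a single wordlist with one cell per name, can be sketched in Python. This is only an illustrative sketch (the class and names are mine, not MAL's or any Forth system's actual implementation):

```python
# A sketch of the frame-chain environment model: each function call
# creates a fresh frame; lookup searches the chain from the innermost
# frame outward (the "search order").
class Env:
    def __init__(self, outer=None, binds=None):
        self.data = dict(binds or {})
        self.outer = outer

    def get(self, name):
        env = self
        while env is not None:      # innermost frame first
            if name in env.data:
                return env.data[name]
            env = env.outer
        raise NameError(name)

# Two simultaneous activations of the same function get distinct frames,
# which a single wordlist (one instance of each word) cannot represent.
global_env = Env(binds={"x": 1})
frame_a = Env(outer=global_env, binds={"n": 10})
frame_b = Env(outer=global_env, binds={"n": 20})  # e.g. a recursive call
assert frame_a.get("n") == 10
assert frame_b.get("n") == 20
assert frame_a.get("x") == 1      # falls through to the outer frame
```

A closure, in this model, is simply a function paired with the frame chain that was current when it was defined.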
On 2023-08-15, albert@cherry.(none) (albert) <albert@cherry> wrote:
> That would mean that
> (def! fib (fn* (N) (if (= N 0) 1
>                    (if (= N 1) 1 (+ (fib (- N 1))
>                                     (fib (- N 2)))))))
> would fail, but it doesn't:
> (fib 10)
fib is not capturing lexical closures. There are multiple instances, but
they are not accessible at the same time. In particular, once any
activation of fib terminates, nothing accesses that instance of N any
more.
We can write fib in C, whose local variables turn to pixie dust when the
block scope ends, allowing a simple stack to be used for locals.
Lexical closures mean that the variables visible in any activation of a
function can live indefinitely. Lexical closures are objects that can
escape from the context where they are created, and be invoked after
that context has terminated.
Mastodon: @Kazinator@mstdn.ca

This is lisp parlance. The concept of lexical closures is present
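The point that a lexical closure lets a variable instance outlive the activation that created it can be illustrated with a small Python sketch (names are illustrative only):

```python
# Each call to make_counter creates an activation frame holding n.
# The returned closure keeps that frame alive after the call returns.
def make_counter():
    n = 0
    def inc():
        nonlocal n      # refers to the captured frame, not a global
        n += 1
        return n
    return inc          # the closure escapes its creating context

c1 = make_counter()
c2 = make_counter()     # an independent instance of n
assert (c1(), c1(), c2()) == (1, 2, 1)
```

This is exactly what a plain stack of locals cannot support: the frame holding `n` is still accessed long after `make_counter` has terminated.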
minforth@gmx.net (minforth) writes:
> My quotation "model" with access to upvalues works here.
Useful now and then, but they cannot pass the man-or-boy test.

Why not?
What use did you find for them?
I always thought that closures were invented by lispers, because
in lisp you cannot write normal programs.

I find this extremely interesting. The Pascal specification is
clear. I'd not have thought that - for me - such an obscure feature as
closures was present in Pascal, much less that you could remove
this feature.
albert@spenarnc.xs4all.nl writes:
> I always thought that closures were invented by lispers, because
> in lisp you cannot write normal programs.
Lisp didn't get them until fairly late in its evolution, I think. Maybe
old time Lispers here would know. But I think Scheme was the first
dialect that really made use of them. Algol 60 had its own version much
earlier, in the form of call-by-name parameters.
albert@spenarnc.xs4all.nl wrote:
> I find this extremely interesting. The Pascal specification is
> clear. I'd not have thought that - for me - such an obscure feature as
> closures was present in Pascal, much less that you could remove
> this feature.
IIRC Pascal had nested functions from the beginning, unlike C
where they still don't exist. Closures are especially interesting
for functional programming. Since Forth is typeless and treats
execution tokens, addresses and numbers the same, functional
programming is perhaps not as interesting as in other languages.
albert@spenarnc.xs4all.nl writes:
>> I always thought that closures were invented by lispers, because
>> in lisp you cannot write normal programs.
> Lisp didn't get them until fairly late in its evolution, I think. Maybe
> old time Lispers here would know. But I think Scheme was the first
> dialect that really made use of them. Algol 60 had its own version much
> earlier, in the form of call-by-name parameters.
Jensen's device, call-by-name parameters. That was an obscure feature
that got into Algol 60 by accident.

That was a form of closure? I would say that it is,
but in Algol-60 I guess it can be stack
allocated, unlike Scheme closures which have to be on the heap.
But I thought call-by-name was not at all accidental.
> Since Forth is typeless and treats
> execution tokens, addresses and numbers the same, functional
> programming is perhaps not as interesting as in other languages.

is the HOPL paper) says that they just wanted an elegant specification
that (I think) supports in-out semantics. What they wrote down was
call-by-name, but they were not aware of all the consequences when
they wrote it.

> but in Algol-60 I guess it can be stack allocated, unlike Scheme
> closures which have to be on the heap.
Sophisticated Scheme compilers can determine when they can reside on
the stack.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> is the HOPL paper) says that they just wanted an elegant specification
> that (I think) supports in-out semantics. What they wrote down was
> call-by-name, but they were not aware of all the consequences when
> they wrote it.
I don't remember Algol syntax but I had thought using call-by-name as a
cheap inline function was idiomatic in it. E.g. to add up the first n
squares, you could say

  a = sum(i, 1, n, i*i)
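The trick in sum(i, 1, n, i*i) is that a by-name parameter is re-evaluated every time it is mentioned, with the current value of i. A rough Python simulation of Jensen's device, using an explicit reference cell for i and a zero-argument thunk for the term (the Ref class and names are my own, not from the thread):

```python
# Call-by-name simulated: i is a shared mutable cell, and the term
# expression is passed as a thunk that re-reads i on every evaluation.
class Ref:
    def __init__(self, value=0):
        self.value = value

def sum_by_name(i, lo, hi, term):
    total = 0
    i.value = lo
    while i.value <= hi:
        total += term()     # re-evaluates the expression with current i
        i.value += 1
    return total

i = Ref()
# sum of the first 5 squares: 1 + 4 + 9 + 16 + 25
assert sum_by_name(i, 1, 5, lambda: i.value * i.value) == 55
# the same sum routine computes a harmonic sum just by changing the term
harmonic = sum_by_name(i, 1, 100, lambda: 1 / i.value)
assert 5.18 < harmonic < 5.19
```

The same sum routine serves for any expression in i, which is exactly what made the device attractive before procedures-as-parameters were commonplace.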
Paul Rubin <no.email@nospam.invalid> writes:
>> anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>> is the HOPL paper) says that they just wanted an elegant specification
>> that (I think) supports in-out semantics. What they wrote down was
>> call-by-name, but they were not aware of all the consequences when
>> they wrote it.
> I don't remember Algol syntax but I had thought using call-by-name as a
> cheap inline function was idiomatic in it. E.g. to add up the first n
> squares, you could say
>   a = sum(i, 1, n, i*i)
It may have become idiomatic after Jensen's device became well-known.
That does not mean that it was intended.
However, I don't think that it became idiomatic, because if it had
become idiomatic, the successor languages of Algol 60 would have
supported call by name, maybe as default, or maybe as a special option
for passing parameters (syntactically similar to the VAR parameters in
Pascal). None of that happened.
If you want to see what happens if something becomes idiomatic, look
at Lisp: The intention for the language was lexical scoping, but the
implementation used dynamic scoping. By the time this was recognized
as a bug, enough programs had been written that relied on dynamic
scoping and enough programmers had become accustomed to this behaviour
that they could not just fix it, but instead used a workaround (the
FUNARG device) when they wanted to have lexical-scoping semantics.
Eventually Common Lisp (started 1981, released 1984) added a separate
syntax for lexical scoping to mainstream Lisp, but that was more than
two decades after dynamically scoped Lisp had been implemented and
become idiomatic.
Another case is the story of S-expressions vs. (Algol- or ML-like)
M-expressions in Lisp.
> Sure, but Algol-60 didn't create the possibility of having to heap
> allocate anything. So it avoided needing GC, which would have been a
> big minus in that era. Lisp existed then but idk if it was actually
> used for anything outside of research.
And yet, Lisp had so much existing code by the time the scoping
implementation was discovered as being buggy that they could not fix
it. Algol-60 has been described as a publication language, so maybe
there was actually more running Lisp code around than Algol-60 code.
Sure, Burroughs used Algol-60 for their large systems, but they and
their customers did not like Jensen's device themselves, or they did
not participate in the development of other programming languages that
received any scrutiny in language design discussions. In any case,
call-by-name does not appear in any later languages that I have ever
heard of.
- anton
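The lexical-vs-dynamic distinction discussed above can be made concrete with a small sketch. This simulates dynamic scoping with an explicit binding stack; early Lisps implemented the equivalent lookup natively:

```python
# Lexical scoping: the closure uses the environment where it was DEFINED.
def make_adder(x):
    return lambda y: x + y

add5 = make_adder(5)
assert add5(3) == 8              # x found in the defining environment

# Dynamic scoping (simulated): lookup walks the CALL-TIME binding stack,
# which is the behaviour early Lisp implementations provided.
bindings = [("x", 1)]

def dyn_get(name):
    for n, v in reversed(bindings):   # most recent binding wins
        if n == name:
            return v
    raise NameError(name)

def dyn_add(y):
    return dyn_get("x") + y

assert dyn_add(3) == 4           # sees the outer binding x = 1
bindings.append(("x", 100))      # some caller rebinds x
assert dyn_add(3) == 103         # same function, different answer
bindings.pop()
```

Under dynamic scoping the same function gives different results depending on who called it, which is the behaviour so much early Lisp code came to rely on.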
In Algol 68 the unclean Jensen's device was replaced by references
once it was realized what it was. This permitted the same code,
without the mystification.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> However, I don't think that it became idiomatic, because if it had
> become idiomatic, the successor languages of Algol 60 would have
> supported call by name,
I don't see this implication. It could be idiomatic and simultaneously
have been considered a bad idea.
> The intention for [Lisp] was lexical scoping, but the implementation
> used dynamic scoping. ... Eventually Common Lisp (started 1981,
> released 1984) added a separate syntax for lexical scoping to
> mainstream Lisp, but that was more than two decades after dynamically
> scoped Lisp had been implemented and become idiomatic.
Scheme had lexical scope in the late 1970s and I believe it appeared in
some Lisps earlier than Common Lisp, but that was before my time.
There is also the matter that dynamic
scope is very easy to implement, so that might have affected what people
did.
> Another case is the story of S-expressions vs. (Algol- or ML-like)
> M-expressions in Lisp.
M-expressions never caught on because Lispers liked S-expressions.
> In any case, call-by-name does not appear in any later languages that
> I have ever heard of.
https://www.geeksforgeeks.org/scala-functions-call-by-name/
albert@spenarnc.xs4all.nl writes:
> In Algol 68 the unclean Jensen's device was replaced by references
> once it was realized what it was. This permitted the same code,
> without the mystification.
Whatever you may mean by "unclean" and "mystification", looking at
<https://rosettacode.org/wiki/Jensen%27s_Device#ALGOL_68> shows the
header of sum to be

  PROC sum = (REF INT i, INT lo, hi, PROC REAL term)REAL:

which is then called with

  sum (i, 1, 100, REAL: 1/i)

So i is passed by REFerence, lo and hi are passed by value (the
default in Algol 68, while call-by-name is the default in Algol 60),
and 1/i is passed as PROC (the implementation mechanism behind
call-by-name). Try using "REF" instead of "PROC", and see that the
program does not work as intended (if it compiles at all).
- anton
--
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: https://forth-standard.org/
EuroForth 2023: https://euro.theforth.net/2023
What I see in the Algol 60, Algol W, Algol 68, and Pascal entries of
<https://rosettacode.org/wiki/Jensen's_Device> is a steady progression
away from call-by-name (and towards call-by-value):
In Algol 68, i is passed by-REFerence, lo and hi are passed by-value
(the default in Algol 68), and term is passed by-PROCedure (probably
like the procedure mode of Algol 60).

> Common Lisp requires implementing both dynamic scoping (using the '
> syntax) and static scoping (IIRC using the #' syntax).
Dynamic scoping does not make the implementation of Common Lisp
easier. It's there because programs were written to work with dynamic
scoping. So many programs that eliminating it and switching to Scheme
was impractical.

[Scala] the supposed call-by-name is actually restricted like the
procedure/PROC modes of Algol W and Algol 68: It cannot be used for
passing i, and therefore some extra work was done to pass i in a way
that is modifiable.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> What I see in the Algol 60, Algol W, Algol 68, and Pascal entries of
> <https://rosettacode.org/wiki/Jensen's_Device> is a steady progression
> away from call-by-name (and towards call-by-value):
It sounds like Algol 68 and Pascal (don't know about Algol W) had a way
to pass procedures as parameters. That was missing from Algol 60 so the
only way to get that effect was call by name. E.g. if you want to write
a root finder that finds a zero of some function, how do you pass the
function?
It's weird though. Function parameters were straightforward and
important in FORTRAN, which predated Algol 60, so I'd expect the Algol
60 designers to have known better. It does sound from Naur's report
that not all of the Algol 60 committee knew what it was getting into
with call by name.
> In Algol 68, i is passed by-REFerence, lo and hi are passed by-value
> (the default in Algol 68), and term is passed by-PROCedure (probably
> like the procedure mode of Algol 60).
Algol 60 had a procedure mode?
> [Scala] the supposed call-by-name is actually restricted like the
> procedure/PROC modes of Algol W and Algol 68: It cannot be used for
> passing i, and therefore some extra work was done to pass i in a way
> that is modifiable.
It's maybe similar in Haskell, whose lazy evaluation is supposed to be
semantically equivalent to call-by-name, but which is memoized, doable
because mutation is not allowed.
The lazy evaluation of Haskell is certainly not equivalent to
call-by-name, because you cannot implement Jensen's device with it.
<https://rosettacode.org/wiki/Jensen%27s_Device#Haskell> does
something with monads that I don't understand.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> The lazy evaluation of Haskell is certainly not equivalent to
> call-by-name, because you cannot implement Jensen's device with it.
Jensen's device depends on having mutable variables. If they are not
allowed, Haskell's evaluation is equivalent to call-by-name.
> <https://rosettacode.org/wiki/Jensen%27s_Device#Haskell> does
> something with monads that I don't understand
It creates a mutable memory cell (STRef) that is read and written as if
it's an i/o device (readSTRef/writeSTRef). Then it makes a closure that
reads from the cell and returns the reciprocal of the contents, and
sums over the result of that closure when it writes k=1,2...n into the
cell. Ugh!!!
In article <87zfuvxfer.fsf@nightsong.com>,
Paul Rubin <no.email@nospam.invalid> wrote:
> anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>> The lazy evaluation of Haskell is certainly not equivalent to
>> call-by-name, because you cannot implement Jensen's device with it.
> Jensen's device depends on having mutable variables. If they are not
> allowed, Haskell's evaluation is equivalent to call-by-name.
>> <https://rosettacode.org/wiki/Jensen%27s_Device#Haskell> does
>> something with monads that I don't understand
> It creates a mutable memory cell (STRef) that is read and written as if
> it's an i/o device (readSTRef/writeSTRef). Then it makes a closure that
> reads from the cell and returns the reciprocal of the contents, and
> sums over the result of that closure when it writes k=1,2...n into the
> cell. Ugh!!!
It shows, in my opinion, that Jensen's device is a serendipitous
technique that is only used when there are no better techniques
available.
There is no discretion at Rosetta, otherwise this example would be
thrown out as not appropriate for a forum meant to show techniques.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>> The lazy evaluation of Haskell is certainly not equivalent to
>> call-by-name, because you cannot implement Jensen's device with it.
> Jensen's device depends on having mutable variables. If they are not
> allowed, Haskell's evaluation is equivalent to call-by-name.
>> <https://rosettacode.org/wiki/Jensen%27s_Device#Haskell> does
>> something with monads that I don't understand
> It creates a mutable memory cell (STRef) that is read and written as if
> it's an i/o device (readSTRef/writeSTRef). Then it makes a closure that
> reads from the cell and returns the reciprocal of the contents, and
> sums over the result of that closure when it writes k=1,2...n into the
> cell. Ugh!!!
My notion as well. If you look at closures e.g. in Javascript, it seems
that the main benefit of closures is the easy handling of private
methods and variables without opening a big barrel like OOP.
The other way round: if you already have OO in your language, you don't
need closures.
Jensen's device or Knuth's man-or-boy are just programming playgrounds.
minforth@gmx.net (minforth) writes:
> My notion as well. If you look at closures e.g. in Javascript, it seems
> that the main benefit of closures is the easy handling of private
> methods and variables without opening a big barrel like OOP.
The main benefit of closures is that you can pass data to a callback
that is not provided for by the interface of that callback.
OOP does not provide this capability, so "opening a big barrel like OOP"
would not help.
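The point about smuggling data past a callback interface can be sketched as follows (all names here are hypothetical, chosen for illustration):

```python
# A callback interface that only passes the item; the closure carries
# extra data (prefix, sink) that the interface knows nothing about.
def for_each(items, callback):
    for item in items:
        callback(item)

def make_logger(prefix, sink):
    # prefix and sink are captured by the closure, not by for_each
    def log(item):
        sink.append(f"{prefix}{item}")
    return log

out = []
for_each([1, 2, 3], make_logger("item=", out))
assert out == ["item=1", "item=2", "item=3"]
```

Without closures, the same effect needs either global state or an object whose method serves as the callback, which is the "big barrel" being discussed.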
> It shows, in my opinion, that Jensen's device is a serendipitous
> technique that is only used when there are no better techniques
> available.
And if it's required to use every argument, then eager evaluation
(call-by-value) is equivalent to Haskell's lazy evaluation.
The way I have heard about Haskell programming up to now is that one
tries to have a pure functional part, and then use monads at the
fringes for things like I/O where pure functional code does not cut
it. I don't see this kind of separation in the Haskell code above.
I wonder if there is a more idiomatic way of writing this stuff in
Haskell.