That example I posted elsewhere (a function returning a function) relies
on lexical binding in order to work properly.
Is lexical binding accepted as a standard part of Common Lisp yet? I know
it was built in from the start in Scheme. But some in the older Lisp community still seem to think dynamic binding is useful.
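(For concreteness — this is not the example referred to above, just a minimal Emacs Lisp sketch of the same shape, with made-up names:)

;; Assumes lexical binding is in effect, e.g. a file with
;; -*- lexical-binding: t; -*-
(defun make-adder (n)
  "Return a function that adds N to its argument."
  (lambda (x) (+ x n)))        ; the lambda closes over the lexical N

(funcall (make-adder 3) 4)     ; => 7

;; Under purely dynamic binding the returned lambda would instead look
;; up N at call time, when no binding for N exists any more, so the
;; call would fail (or silently pick up some unrelated N).

In Common Lisp, which has defaulted to lexical scoping since CLtL1 (1984), the equivalent DEFUN returning a LAMBDA is perfectly standard; only variables declared SPECIAL are dynamically bound.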
That example I posted elsewhere (a function returning a function) relies
on lexical binding in order to work properly.
Is lexical binding accepted as a standard part of Common Lisp yet?
I know
it was built in from the start in Scheme. But some in the older Lisp community still seem to think dynamic binding is useful.
I will never understand how these myths perpetuate.
Now you have me wondering about the scoping rules of Church's lambda calculus, or if the concept even applies. At first impression it seems lexical.
Assuming that the interpreter in the appendix of the Lisp 1.5 manual is
authoritative, I would say that Lisp 1.5 was lexical.
I recall that in McCarthy’s memoir of the origins of Lisp, he thought the scoping issue (the “FUNARG problem”, I think he called it) was a simple coding bug, and got one of his grad students to fix it.
Paul Rubin <no.email@nospam.invalid> writes:
... I never used Maclisp or Lisp 1.5 so I don't know how their
scoping worked.
MacLisp was weird. You typically debugged your program using the
MacLisp interpreter because it made the debugging cycle faster, and the interpreter was purely dynamically scoped. But when the MacLisp compiler compiled your code, all the variables you hadn't declared SPECIAL became lexical! I know that sounds crazy, but because MacLisp didn't really
support closures, it wasn't too hard to write code in a style such that
it didn't really matter whether variables were dynamic or lexical.
Assuming that the interpreter in the appendix of the Lisp 1.5 manual is authoritative, I
would say that Lisp 1.5 was lexical. But I'm not 100% certain that that interpreter really reflects the system's true semantics. (Does the
source code for the Lisp 1.5 system still exist anywhere?)
Now you have me wondering about the scoping rules of Church's lambda
calculus, or if the concept even applies. At first impression it seems
lexical.
It's definitely lexical, but you are right to wonder if the distinction
even applies. Our notion of "dynamic" scoping seems pretty closely tied
to the workings of the typical applicative order Lisp evaluator. I
imagine Church himself would be pretty puzzled by our notion of dynamic scoping...
Kaz Kylheku <433-929-6894@kylheku.com> writes:
Even though LC is lexical, I suspect that examples of LC that break
under dynamic scope have to be either incorrect, or convoluted.
Alan Bawden <alan@csail.mit.edu> writes:
I will never understand how these myths perpetuate.
Maybe people confuse Common Lisp with Emacs Lisp, which was historically purely dynamically scoped. I don't know if it is still that way, not counting Guile Emacs. I never used Maclisp or Lisp 1.5 so I don't know
how their scoping worked. But dynamic scope is convenient for
simple-minded implementations.
Now you have me wondering about the scoping rules of Church's lambda calculus, or if the concept even applies. At first impression it seems lexical.
Lexical bindings were desired by most of us, but we had neither the
address space nor the memory to make effectively fast, usable
implementations.
Kaz Kylheku <433-929-6894@kylheku.com> writes:
Even though LC is lexical, I suspect that examples of LC that break
under dynamic scope have to be either incorrect, or convoluted.
I'm having trouble understanding what you wrote here because I can't
figure out for the life of me what you mean by "LC". At first I thought
you meant LC as short for Lambda Calculus, but later you start talking
about "normal LC" vs. "dynamic LC", and since I have no idea what
"dynamic lambda calculus" could be, I'm stumped.
It has been observed that a lot of Lisp code works fine if we substitute dynamic binding for lexical.
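A small sketch of the kind of code that does break (Emacs Lisp with made-up names, assuming dynamic binding, i.e. a file with lexical-binding nil):

;; A hand-rolled map whose loop variable happens to be called `x'.
(defun my-map (fn lst)
  (let (result)
    (dolist (x lst (nreverse result))
      (push (funcall fn x) result))))

(defun add-x (x lst)
  ;; The lambda refers freely to X.  Under dynamic binding, by the time
  ;; FN is called inside `my-map', X is dynamically bound to the current
  ;; list element, shadowing the caller's X.
  (my-map (lambda (y) (+ y x)) lst))

(add-x 10 '(1 2 3))
;; dynamic binding  => (2 4 6)    ; each element gets added to itself
;; lexical binding  => (11 12 13) ; the intended result

Code whose lambdas have no free variables (or that only refers to globals/special variables) behaves the same either way, which is the style the MacLisp remark above alludes to.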
On Mon, 15 Jan 2024 17:25:08 -0700, Jeff Barnett wrote:
Lexical bindings were desired by most of us, but we had neither the
address space nor the memory to make effectively fast, usable
implementations.
Basically, call frames (or parts of them) go on the heap. I think Python manages some efficiency gains by only including referenced variables.
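A quick illustrative check of that in Python (hypothetical names):

def make_counter(start):
    unused = "never referenced by the inner function"
    count = start
    def step():
        nonlocal count
        count += 1
        return count
    return step

c = make_counter(10)
print(c(), c())                                        # 11 12
print(c.__code__.co_freevars)                          # ('count',) -- `unused' is not captured
print([cell.cell_contents for cell in c.__closure__])  # [12]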
In the old days, Lisps were LOCALLY scoped and included special variables
(dynamically scoped) too. Closures were available in some implementations for
a specified set of special variables. N.B. that by LEXICAL scope we normally
mean that nested function definitions see local bindings wherever they are
executed - not so for LOCAL binding.
In the old days, Lisps were LOCALLY scoped and included special variables
(dynamically scoped) too. Closures were available in some implementations for
a specified set of special variables. N.B. that by LEXICAL scope we normally
mean that nested function definitions see local bindings wherever they are
executed - not so for LOCAL binding.
See for example the definition of "closures" in the context of Lisp-Machine-Lisp at https://hanshuebner.github.io/lmman/fd-clo.xml#closure
Basically, they're functions that remember the value of dynbound
variables at the time that the closure was created and then rebind those dynbound vars to those values around the evaluation of their body.
In ELisp you could implement it as follows:
;; Needs `oclosure' (Emacs 29+) and `cl-lib' (for `cl-progv').
(require 'oclosure)
(require 'cl-lib)

(oclosure-define (lml-closure
                  (:predicate lml-closurep))
  bindings function)

(defun lml-closure (varlist function)
  "Create a \"closure\" over the dynamic variables in VARLIST."
  (oclosure-lambda
      (lml-closure (bindings (mapcar (lambda (v) (cons v (symbol-value v)))
                                     varlist))
                   (function function))
      (&rest args)
    (cl-progv (mapcar #'car bindings) (mapcar #'cdr bindings)
      (apply function args))))
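A hypothetical usage sketch of the above (assuming Emacs 29+ for oclosure, and a dynamically scoped variable my-setting introduced just for illustration):

(defvar my-setting 'default)

(setq snapshot
      (let ((my-setting 'special))      ; dynamic binding in effect here
        (lml-closure '(my-setting)
                     (lambda () my-setting))))

my-setting                ; => default   (the LET has exited)
(funcall snapshot)        ; => special   (the closure re-establishes its snapshot)
(lml-closurep snapshot)   ; => t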
On 1/16/2024 4:16 PM, Stefan Monnier wrote:
In the old days, Lisps were LOCALLY scoped and included special variables
(dynamically scoped) too. Closures were available in some implementations for
a specified set of special variables. N.B. that by LEXICAL scope we normally
mean that nested function definitions see local bindings wherever they are
executed - not so for LOCAL binding.
See for example the definition of "closures" in the context of
Lisp-Machine-Lisp at https://hanshuebner.github.io/lmman/fd-clo.xml#closure
Note that above I was talking about Lisps that predated the Lisp Machines.
Basically, they're functions that remember the value of dynbound
variables at the time that the closure was created and then rebind those
dynbound vars to those values around the evaluation of their body.
There was no "rebinding". Rather, the original binding was used by all
who could see it vis-à-vis the scoping rules. Since multiple closures
(including the one where that binding occurred) could reference the same
lexical binding, creating multiple rebindings would screw up the semantics.
On Wed, 17 Jan 2024 20:37:43 -0000 (UTC), Kaz Kylheku wrote:
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
I’m just a humble Python programmer, but it seems to me there are easier ways of doing such things: create and instantiate a class.
Then you can define methods that access and update the internal state, coupled with a method that says “do the actual work”. You can even manage that internal state via assignable “properties”, instead of explicit getter/setter method calls.
And for added flavour, if you define a “__call__()” method that does the actual work, you can call the class instance as though it were a function.
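A rough Python sketch of that arrangement (hypothetical names; a counter with an assignable property and a __call__ method):

class Counter:
    """Closure-like state held on an instance instead of in a closure."""

    def __init__(self, start=0):
        self._count = start            # internal state

    @property
    def count(self):                   # assignable "property" instead of getter/setter calls
        return self._count

    @count.setter
    def count(self, value):
        self._count = value

    def __call__(self):                # "do the actual work": call the instance like a function
        self._count += 1
        return self._count

c = Counter(10)
print(c(), c())     # 11 12
c.count = 0
print(c())          # 1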
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
In ELisp you could implement it as follows ...
Kaz Kylheku wrote:
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
Didn't Interlisp do that with spaghetti stacks?
"One of the most innovative of the language extensions introduced by Interlisp was the spaghetti stack. The problem of retention (by
closures) of the dynamic function-definition environment in the presence
of special variables was never completely solved until spaghetti stacks
were invented." The Evolution of Lisp, Steele and Gabriel, 1993.
On Wed, 17 Jan 2024 20:37:43 -0000 (UTC), Kaz Kylheku wrote:
By the way, I once made a (proprietary, closed source) Lisp dialect
which had dynamic scope, but with proper closures!
I’m just a humble Python programmer, but it seems to me there are easier
ways of doing such things: create and instantiate a class.
Then you can define methods that access and update the internal state,
coupled with a method that says “do the actual work”. You can even manage
that internal state via assignable “properties”, instead of explicit
getter/setter method calls.
And for added flavour, if you define a “__call__()” method that does the
actual work, you can call the class instance as though it were a function.
Programming with closures is more like using "prototype OO". Prototype systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
On Thu, 18 Jan 2024 23:39:26 -0500, George Neuner wrote:
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
On Fri, 19 Jan 2024 05:42:33 -0000 (UTC), Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 18 Jan 2024 23:39:26 -0500, George Neuner wrote:
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
You can clone/copy objects in Python, but - barring use of reflection
- you can't easily change an object's *declaration*, i.e. its
properties (data fields) and methods (functions). Those are tied to
the object's class.
On Thu, 18 Jan 2024 23:39:26 -0500, George Neuner wrote:
Programming with closures is more like using "prototype OO". Prototype
systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
from types import *
However, when you modify an object with __addattr__ et al,
the object's *class* information is copied and then modified to create
a new anonymous class ...
On Fri, 19 Jan 2024 12:11:13 -0500, George Neuner wrote:
On Fri, 19 Jan 2024 05:42:33 -0000 (UTC), Lawrence D'Oliveiro
<ldo@nz.invalid> wrote:
On Thu, 18 Jan 2024 23:39:26 -0500, George Neuner wrote:
Programming with closures is more like using "prototype OO". Prototype systems don't have classes, but rather ANY object can be modified to
change its set of instance data and/or methods, and can be cloned to
create new objects of that same "type".
That’s true of Python, too.
You can clone/copy objects in Python, but - barring use of reflection
- you can't easily change an object's *declaration*, i.e. its
properties (data fields) and methods (functions). Those are tied to
the object's class.
No they are not.
On Sat, 20 Jan 2024 19:20:46 -0500, George Neuner wrote:
However, when you modify an object with __addattr__ et al,
the object's *class* information is copied and then modified to create
a new anonymous class ...
Shut up already. And try this:
class ExampleClass :

    def method(self) :
        print("I am the true method.")
    #end method

#end ExampleClass

def false_method() :
    print("I am the impostor method.")
#end false_method

inst1 = ExampleClass()
inst2 = ExampleClass()
inst2.method = false_method
inst1.method()
inst2.method()
del inst2.method
inst2.method()
Output:
I am the true method.
I am the impostor method.
I am the true method.
On 20 Jan 2024 10:13:40 GMT, Stefan Ram wrote:
from types import *
Death to wildcard imports!
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
...
Shut up already. And try this:
Calm down. The point that George was trying to make is that neither
Python nor Common Lisp is a prototype OO system. Prototype OO was once
(in the early 80s) a serious contender for how to design an
object-oriented language. Before Common Lisp had an object system there
were several proposals floating around, some of which were prototype
based. But I don't think any programming language ever actually wound
up using such a system.
>>> class Frob:
...     def __len__(self):
...         return 17
...
>>> x = Frob()
>>> len(x)
17
>>> x.__len__()
17
>>> x.__len__ = lambda: 23
>>> x.__len__()
23
>>> len(x)
17
Why didn't that work?
Alan Bawden <alan@csail.mit.edu> writes:
Why didn't that work?
Implicit invocations of special methods are only guaranteed
to work correctly if defined on an object's type, not in
the object's instance dictionary. (See: The Python Language
Reference, Release 3.13.0a0; 3.3.12 Special method lookup.)
(The Python standard library assumes class-based objects.)
... but I think it's pretty clear that Python is not prototype-based,
but class-based.