Which is preferred and why?
\ Bump blank lines
: ?BUMP ( u -- )
if blanks off end 1 blanks +! ;
\ Write a line
: PUT ( a u -- )
dup ?bump ...
or
\ Bump blank lines
: ?BUMP ( u -- u )
dup if blanks off end 1 blanks +! ;
\ Write a line
: PUT ( a u -- )
?bump ...
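To make the comparison concrete, here is a sketch of how each variant reads at the call site. This is not dxforth's actual code: `blanks` is assumed to be a variable counting pending blank lines, the system-specific `end` is rendered with standard `ELSE ... THEN`, and the body of PUT is invented for illustration:

```forth
variable blanks                 \ pending blank-line count (assumed)

\ Variant 1: ?BUMP consumes the length; every caller must DUP first.
: ?BUMP  ( u -- )   if blanks off else 1 blanks +! then ;
: PUT    ( a u -- ) dup ?bump type cr ;

\ Variant 2: ?BUMP passes the length through; callers use it bare.
: ?BUMP2 ( u -- u ) dup if blanks off else 1 blanks +! then ;
: PUT2   ( a u -- ) ?bump2 type cr ;
```

Either way the DUP is written exactly once; the question is whether it lives inside ?BUMP or at each call site.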
On Monday, April 24, 2023 at 1:26:43 AM UTC-4, dxforth wrote:
Which is preferred and why?
\ Bump blank lines
: ?BUMP ( u -- )
if blanks off end 1 blanks +! ;
\ Write a line
: PUT ( a u -- )
dup ?bump ...
or
\ Bump blank lines
: ?BUMP ( u -- u )
dup if blanks off end 1 blanks +! ;
\ Write a line
: PUT ( a u -- )
?bump ...
It all just seems simpler to let the words consume the arguments, unless there is a specific reason to do otherwise. One would be because the use always requires the same parameters after the word. I do this sometimes. It just seems simpler to put the code to manage this inside the word, rather than duplicate it each time the word is used.
I suppose some would say I'm not doing a good job of factoring.
Which is preferred and why?
I think if there's no real gain to do otherwise, then
it is better to stick to the rule.
Of course the rule may be broken if the result is shorter,
more comprehensible, or just faster executing program.
I think if there's no real gain to do otherwise, then
it is better to stick to the rule.
Of course the rule may be broken if the result is shorter,
more comprehensible, or just faster executing program.
If a rule doesn't provide some advantage, then why have it? I suppose there is a small advantage simply from being consistent. But, if you break the rule for cases where there's an advantage, how is that different from having no rule and doing each case on its own merits?
I will answer using an analogy: in photography, to create a nice (or even beautiful) picture, you need to know (and to apply) a few rules of composition. Of course there are situations when you decide that you'll get a better picture when you break a particular rule of composition!
...but it should be your *sane* decision — not the lack of knowledge. You won't get a nice picture by not learning anything at all.
Why sticking to the rule pays off? By being consistent you keep your program comprehensible at least for you (if you are willing to expand its functionality a year later, or simply to fix it).
But yes, you may decide you don't like the rule if it doesn't suit you that well; for example: there's a good Forth83 compiler for the C-64 called VolksForth, which DOESN'T clear the stack after you mistype something in the command line. Why they preferred such behavior — I'm not sure, but surely it's an example of the freedom a Forth programmer has at his disposal; like creating a compiler that acts differently, „just like that”.
On Monday, April 24, 2023 at 4:37:11 PM UTC-4, Zbig wrote:
I will answer using an analogy: in photography, to create a nice (or even beautiful) picture, you need to know (and to apply) a few rules of composition. Of course there are situations when you decide that you'll get a better picture when you break a particular rule of composition!
Why sticking to the rule pays off? By being consistent you keep your program comprehensible at least for you (if you are willing to expand its functionality a year later, or simply to fix it).
There are many, many things in programming where there is no rule. We tolerate inconsistency in those things. The question is, why does consistency matter more with duplicating arguments, than with other things?
Sorry, but I don't put much stock in consistency for the sake of consistency. If there is literally no identifiable reason to enforce consistency, then it's just extra work. Am I not making myself clear?
On 25/04/2023 5:23, Lorem Ipsum wrote:
Sorry, but I don't put much stock in consistency for the sake of consistency. If there is literally no identifiable reason to enforce consistency, then it's just extra work. Am I not making myself clear?
Some reasons to enforce consistency include, but are not limited to:
- naming conventions can help you understand the purpose of a word or variable without needing to refer to documentation
- when you return to the code after an extended absence, the above makes the purpose of the code more clear
- you work with others on the same code-base; consistency means less time trying to unravel others' code
- working with a set of rules means you spend less time working out naming / spacing / commenting conventions, and just follow the script in your head
- consistent spacing rules make the layout of the code easier to follow.
etc...
On Tuesday, April 25, 2023 at 12:31:06 AM UTC-4, Ron AARON wrote:
On 25/04/2023 5:23, Lorem Ipsum wrote:
Sorry, but I don't put much stock in consistency for the sake of consistency. If there is literally no identifiable reason to enforce consistency, then it's just extra work. Am I not making myself clear?
Some reasons to enforce consistency include, but are not limited to:
- naming conventions can help you understand the purpose of a word or
variable without needing to refer to documentation
- when you return to the code after an extended absence, the above makes
the purpose of the code more clear
- you work with others on the same code-base; consistency means less
time trying to unravel others' code
- working with a set of rules means you spend less time working out
naming / spacing / commenting conventions, and just follow the script in
your head
- consistent spacing rules make the layout of the code easier to follow.
etc...
You are talking about being consistent in specific purposes. Naming conventions have some rationale.
My point is that I haven't heard anyone make a rationale regarding the issue of when to let data remain on the stack.
dxforth <dxforth@gmail.com> writes:
Which is preferred and why?
\ Bump blank lines
: ?BUMP ( u -- )
if blanks off end 1 blanks +! ;
\ Write a line
: PUT ( a u -- )
dup ?bump ...
or
\ Bump blank lines
: ?BUMP ( u -- u )
dup if blanks off end 1 blanks +! ;
\ Write a line
: PUT ( a u -- )
?bump ...
If you don't know a good reason for the second variant, the first is preferred, as you are aware of. That's even true when the code for
such an example would become shorter when you break the convention
(which is not the case in this example; it just moves the DUP between
the words).
If you don't know a good reason for the second variant, the first is preferred, as you are aware of. That's even true when the code for
such an example would become shorter when you break the convention
(which is not the case in this example; it just moves the DUP between
the words).
That being said, there have been rare cases where I have not followed
this convention, and have instead left the stack alone, mainly for
words that one may want to insert or delete without worrying about the
stack contents. Even the standard has such a word: .S. Maybe ?BUMP
is one of these words, too?
- anton
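A hedged illustration of anton's point: a word with a neutral stack effect can be spliced into a definition during debugging and removed later without disturbing the surrounding stack picture, .S being the canonical case. The probe word below is hypothetical:

```forth
\ Stack-neutral probe: report the current stack depth, change nothing.
: ?TRACE ( -- ) ." depth=" depth . ;

\ Probes can be inserted or deleted freely without rebalancing the stack:
: COMPUTE ( a b -- c ) ?trace + ?trace ;
```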
One thing that one could do on an undefined-word error is to save the
stack contents and the line starting at the undefined word, and have a
key that restores the stack contents and puts the line starting at the undefined word up for editing, and places the cursor at the undefined
word.
- anton
Sorry, but I don't put much stock in consistency for the sake of consistency. If there is literally no identifiable reason to enforce consistency, then it's just extra work. Am I not making myself clear?
Consistency adds more order — and order removes (at least part of) the chore,
when you know „what to expect”. You expect the word to consume its argument(s).
And why the rule of „consuming arguments”? I believe it comes from the fact that the primitives consume them. So — exactly to keep things consistent — the rule came to apply to „higher level” words as well.
And actually by making the words consuming their arguments we get better, more natural-looking code; which is better IYO: „4 STARS” or „4 STARS DROP”?
Consistency adds more order — and order removes (at least part of) the chore, when you know „what to expect”. You expect the word to consume its argument(s).
And actually by making the words consume their arguments we get better, more natural-looking code; which is better IYO: „4 STARS” or „4 STARS DROP”?
Or...
2dup foobar vs. foobar
???
See, the knife cuts both ways.
“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines."
So, have a reason for the rule, or don't have the rule.
2dup foobar vs. foobar
It's not „or” — try this, still using STARS:
: STARS ( n -- ) 0 DO 42 EMIT LOOP ;
See, the knife cuts both ways.
But I don't need any 2dup using STARS. Do you?
So, have a reason for the rule, or don't have the rule.
An average „cliche” isn't any rule.
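Spelling out the contrast with Zbig's STARS (a guarded `?DO` is used here so a zero count prints nothing; otherwise the definitions follow his sketch, and the non-consuming variant is invented for comparison):

```forth
: STARS  ( n -- )   0 ?do 42 emit loop ;      \ consumes its argument
: STARS' ( n -- n ) dup 0 ?do 42 emit loop ;  \ hypothetical non-consuming variant

\ 4 STARS         four asterisks, stack left clean
\ 4 STARS' DROP   same output, but the caller must clean up
```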
Sorry, but if you want to make a point, you need to actually explain the point you are trying to make.
“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.”
Hmmm... it's as good as any of the logic you have applied, and comes from an authority.
I'm tempted to define a repl called BREAK and then define
: QUIT CLS BREAK ;
BREAK is a factor that you can insert in a word that
you really don't know how to debug.
(Nested BREAKs can be ended with ^D.)
“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines."
So, have a reason for the rule, or don't have the rule.
An average „cliche” isn't any rule.
"Tip: Let definitions consume their arguments." - Thinking FORTH.
The example Brodie gives is DUP positioned in the wrong place, resulting in extraneous
execution and the need for DROP. If there's a rule in here it would be: avoid writing
unnecessary code...
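Brodie's actual example isn't reproduced in the thread; the pattern he warns about, a DUP positioned so that its copy outlives any use, can be sketched like this with hypothetical names:

```forth
\ A DUP kept past its use: a matching DROP must follow.
: REPORT-NOISY ( n -- ) dup . drop ;   \ extraneous DUP ... DROP pair
\ Letting the definition consume its argument removes the noise.
: REPORT       ( n -- ) . ;
```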
Regardless of the Brodie example's value, the rule was rather obvious to me.
In Forth the stack is used for passing the arguments, correct? Then if, in the case
of „infix” programming languages, the arguments are just transferred from one
command to another — not remaining „somewhere in the air” after the job is
done — why should Forth words act differently?
As I already wrote: IMHO only when there's a good reason for this. But long ago
I've experienced that leaving too many values on the stack „for future use” in most
cases means later troublesome access to them (causing „stack noise”) and that
better is to use a few variables (or memory array) as a storage.
Of course your initial example is so simplistic, that it can be written one way, or
another; it doesn't make that much of difference in this particular case.
In those infix languages (presumably you mean C) are not those input parameters
which end up in locals but copies?
But it's not what we're talking about — not about how that data is then used, but about
leaving (or not leaving) copies of the input parameters for possible „future use”.
These locals are inside of the „acceptor”.
I don't believe I had a choice doing it the way I finally did. Readability and least
code largely forced my hand - which is the way I like it. Having a choice one may
just as well flip a coin.
It is up to the programmer to select the appropriate criterion.
Should the next function need the same parameters, it gets them from the source.
In Forth it can get them from the stack.
Yep — „should the next function”.
I don't believe I had a choice doing it the way I finally did. Readability and least
code largely forced my hand - which is the way I like it. Having a choice one may
just as well flip a coin.
It is up to the programmer to select the appropriate criterion.
I believe no serious application relies on the result of a coin flip to select where it should take its input from.
It is up to the programmer to select the appropriate criterion.
The application has criteria. I'm not sure the programmer does.
On 25/04/2023 4:10 pm, Lorem Ipsum wrote:
You are talking about being consistent in specific purposes. Naming conventions have some rationale.
My point is that I haven't heard anyone make a rationale regarding the issue of when to let data remain on the stack.
One 'rule' I frequently seem to run into trouble with is ordering of addresses
and counts - specifically CMOVE et al. Gut feeling is it should have been
( src len dest ) but I've never been motivated enough to do the work to prove/
disprove it. I figure that's a job for academics.
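For reference, standard CMOVE takes ( c-addr1 c-addr2 u -- ), i.e. ( src dest len ). The ( src len dest ) order dxforth's gut suggests is only a SWAP away; the wrapper name below is made up for illustration:

```forth
\ Hypothetical argument order: source, length, destination.
: CMOVE' ( src len dest -- ) swap cmove ;
```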
I did some work on an indexed stack machine instruction set. It could dig some small number of locations down into the stack, and I think it could index upward a bit too. I found it could eliminate a *lot* of stack instructions.
On Wednesday, April 26, 2023 at 8:56:00 PM UTC-4, dxforth wrote:
One 'rule' I frequently seem to run into trouble with is ordering of addresses
and counts - specifically CMOVE et al. Gut feeling is it should have been
( src len dest ) but I've never been motivated enough to do the work to prove/
disprove it. I figure that's a job for academics.
Forth code runs on a register CPU, no? I have nearly zero knowledge of optimizations, but it would seem to me this is the sort of thing that can be optimized by the tools. Write the code how you want, and let the tools arrange the assembly code to minimize stack juggling.
I did some work on an indexed stack machine instruction set. It could dig some small number of locations down into the stack, and I think it could index upward a bit too. I found it could eliminate a *lot* of stack instructions. The test case was an interrupt routine to manage a DDS NCO. I think there were three parameters on the stack and of course, the current phase value. I believe the cycle count dropped from 80 to 50 or something equivalent. I have no idea how to write an optimizing Forth compiler.
On Wednesday, April 26, 2023 at 4:59:16 AM UTC-5, dxforth wrote:
The application has criteria. I'm not sure the programmer does.
You're beating a dead horse. It's simply a matter
of one's purpose. If you're doing standards or need
crutches for clarity, you do conventions. If you
do CM style, word behavior is only fixed in the
context of the application where it's applied.
Lorem Ipsum schrieb am Donnerstag, 27. April 2023 um 03:41:33 UTC+2:
I did some work on an indexed stack machine instruction set. It could dig some small number of locations down into the stack, and I think it could index upward a bit too. I found it could eliminate a *lot* of stack instructions.
It's an old questionable thing. Simple demo case, subtraction:
: Mb ( a b -- diff b ) tuck - swap ;
With a word called UP (to step up the stack pointer):
: Mb ( a b -- diff b ) - up ;
Microscopic improvement and unportable.
On Thursday, April 27, 2023 at 11:12:09 AM UTC+2, minforth wrote:
: Mb ( a b -- diff b ) tuck - swap ;
With a word called UP (to step up the stack pointer):
: Mb ( a b -- diff b ) - up ;
Microscopic improvement and unportable.
FORTH> : Mb ( a b -- diff b ) tuck - swap ;
FORTH> ' Mb idis
$0133DC80 : Mb
$0133DC8A pop rbx
$0133DC8B pop rdi
$0133DC8C sub rdi, rbx
$0133DC8F push rdi
$0133DC90 push rbx
$0133DC91 ;
FORTH> : Mb2 ( a b -- diff b ) DUP >R - R> ; ' Mb2 idis
$01340540 : Mb2
$0134054A pop rbx
$0134054B pop rdi
$0134054C sub rdi, rbx
$0134054F push rdi
$01340550 push rbx
$01340551 ;
and
FORTH> : test 125 dup 25 Mb Mb2 . ; ok
FORTH> see test
Flags: ANSI
$013405C0 : test
$013405CA push #125 b#
$013405CC push #75 b#
$013405CE push #25 b#
$013405D0 jmp .+10 ( $0124A102 ) offset NEAR
$013405D5 ;
-marcel
On 27/04/2023 11:41 am, Lorem Ipsum wrote:
Forth code runs on a register CPU, no? I have nearly zero knowledge of optimizations, but it would seem to me this is the sort of thing that can be optimized by the tools. Write the code how you want, and let the tools arrange the assembly code to minimize stack juggling.
AFAIK optimizers are still a long way from being able to handle anything thrown at them.
If a programmer is serious he'll want to improve his game; if he's not I doubt he
cares what comes out the end.
On Thursday, April 27, 2023 at 8:47:51 AM UTC-4, Marcel Hendrix wrote: [..]
On Thursday, April 27, 2023 at 11:12:09 AM UTC+2, minforth wrote:
Lorem Ipsum schrieb am Donnerstag, 27. April 2023 um 03:41:33 UTC+2:
[..]
FORTH> : test 125 dup 25 Mb Mb2 . ; ok
FORTH> see test
Flags: ANSI
$013405C0 : test
$013405CA push #125 b#
$013405CC push #75 b#
$013405CE push #25 b#
$013405D0 jmp .+10 ( $0124A102 ) offset NEAR
$013405D5 ;
I am by no means fluent with x86 assembly (unless this is ARM assembly, with which I am even less fluent), but why don't you keep TOS in a register? Is this faster? Four instructions to move data, with only one instruction performing useful work, seems awkward, to say the least.
If you do CM style, word behavior is only fixed in the context of the application where it's applied.
One of us doesn't understand optimizers. My understanding is they have virtually nothing to do with the written code, and everything to do with the compiled code. Optimizers optimize the compiled code.
A lot of people can't get out of the mindset that a stack CPU is a Forth CPU. That's simply not true. Code for a stack CPU has to be generated from source code, the same way as for any other CPU. After the code generator does its thing, the optimizer can do its thing. I'm sure implementations combine the two to some degree, but this is not about how people code. It's about the code the compiler produces.
That's why I wrote, "I have no idea how to write an optimizing Forth compiler."
On 28/04/2023 1:46 am, Lorem Ipsum wrote:
> That's why I wrote, "I have no idea how to write an optimizing Forth compiler."
One of us doesn't understand Forth. In Forth the programmer is the compiler/optimizer. His role is to minimize stack operations so that the stack machine has less to do. The idea that the programmer should be free to code in a way that pleases him (e.g. locals) because there's a gcc-style compiler/optimizer under the hood misses the point of Forth.
On 28/04/2023 5:48 am, Lorem Ipsum wrote:
> I guess I didn't understand. So Forth compilers don't optimize code? You need to explain that to a few Forth compiler writers, that are doing it wrong.
No, they don't attempt to optimize what the programmer wrote - at least not those that try to remain true to Moore's view that the programmer needs to stay in control. If I can't influence the code generated, what need is there to expose me to a stack and operators? I may just as well use another language - one that hides all the internals.
On Thursday, April 27, 2023 at 8:46:43 PM UTC-4, dxforth wrote:
> No, they don't attempt to optimize what the programmer wrote - at least not those that try to remain true to Moore's view that the programmer needs to stay in control.
Ok, feel free to not use optimizers. You can typically turn them off.
I think your objection is without basis. You seem to be objecting to the concept of optimization in compiled code. What sort of control are you looking for, that you lose with Forth? Every compiled language ignores the "control", other than implementing the functionality of the source code.
Try looking at an HDL. It's very hard to try to control the hardware produced. Rather, the functionality is "described" by the code, and the tool is free to implement this functionality as works best.
On 28/04/2023 10:56 am, Lorem Ipsum wrote:
> I think your objection is without basis. You seem to be objecting to the concept of optimization in compiled code. What sort of control are you looking for, that you lose with Forth?
Scattered throughout VFX code one finds code generator switches being enacted. That's the Forth programmer intervening. This suggests Forth programmers aren't willing to give optimizers free rein.
> Try looking at an HDL. It's very hard to try to control the hardware produced. Rather, the functionality is "described" by the code, and the tool is free to implement this functionality as works best.
I can't speak to any of that or how it came to be. What I try to do is see where Forth is coming from so as not to inadvertently scuttle its 'raison d'etre'.
> Scattered throughout VFX code one finds code generator switches being enacted. That's the Forth programmer intervening. This suggests Forth programmers aren't willing to give optimizers free rein.
In a standard stack CPU, an add deletes the input operands.
Rick C.
In article <5a137452-9b42-4612...@googlegroups.com>,
Lorem Ipsum <gnuarm.del...@gmail.com> wrote:
> In a standard stack CPU, an add deletes the input operands.
In the prevailing instruction sets RISC-V and ARM the most common instruction is of the kind
R1 := R2 <op> R3
dxforth <dxforth@gmail.com> writes:
> Scattered throughout VFX code one finds code generator switches being enacted. That's the Forth programmer intervening. This suggests Forth programmers aren't willing to give optimizers free rein.
It suggests that the "optimizer" does not implement the language variant that the code is written in. As an example, AFAIK return-address-manipulating code is incompatible with VFX's inlining. So if you have code that manipulates return addresses, turning off inlining is a (somewhat blunt) way to tell that to the compiler.
dxforth <dxforth@gmail.com> writes:
> One 'rule' I frequently seem to run into trouble with is ordering of addresses and counts - specifically CMOVE et al. Gut feeling is it should have been ( src len dest )
That suggests that you write the source string into the dest buffer no matter how long the buffer is.
An alternative would be
( src srclen dest destlen -- )
where destlen tells how much space there is in the destination buffer. But what to do if srclen>destlen? Exception? Or silently only copy destlen chars to dest?
I don't know that I've ever programmed anything, while giving thought to the language's 'raison d'etre'. Nope, I went through my notes, and that's just not part of the specifications for the code.
I don't know what you are trying to say at this point. You seem to be a leaf in the wind. Since there's no coherent thoughts being expressed, I guess we are done.
On 28/04/2023 5:05 pm, Lorem Ipsum wrote:
> I don't know what you are trying to say at this point. You seem to be a leaf in the wind. Since there's no coherent thoughts being expressed, I guess we are done.
Before you go you might explain what is your rationale for using Forth over other languages? It can't be that you accept Moore's reasons for it, because clearly you don't.
> An alternative would be
> ( src srclen dest destlen -- )
> where destlen tells how much space there is in the destination buffer. But what to do if srclen>destlen? Exception? Or silently only copy destlen chars to dest?
It's a complication I've thus far not needed. Should it arise I imagine I'd do the test first and then pass to a primitive move as necessary.
dxforth <dxforth@gmail.com> writes:
> It's a complication I've thus far not needed. Should it arise I imagine I'd do the test first and then pass to a primitive move as necessary.
Right. And what would that primitive move look like, say, for the truncating variant:
: safemove ( c-addr1 u1 c-addr2 u2 -- )
  rot umin move ;
So maybe Moore had this in mind when he designed CMOVE with the stack effect ( from to count -- ).
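A quick illustration of the truncating behaviour (buffer names hypothetical; assumes the SAFEMOVE definition above):

```forth
create src 16 chars allot    \ hypothetical source buffer
create dst  4 chars allot    \ hypothetical 4-char destination

src 10 dst 4 safemove   \ u2=4 is smaller: UMIN truncates the copy to 4 chars
src  3 dst 4 safemove   \ u1=3 is smaller: all 3 chars are copied
```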
dxforth <dxforth@gmail.com> writes:
On 29/04/2023 3:24 pm, Anton Ertl wrote:
A frequency analysis would likely show SWAP MOVE outnumbered MIN MOVE.
Looking at the data from <https://www.complang.tuwien.ac.at/forth/peep/sorted>:
sum cross gs prims2x
[a4:~/pub/forth/peep:91748] awk 'NF==6 && $6=="move" {print}' sorted
7825 7648 72 105 r> move
3472 1898 561 1013 ;s move
838 0 191 647 count move
579 150 225 204 rot move
70 15 49 6 -rot move
15 0 15 0 cells move
1 1 0 0 execute move
[a4:~/pub/forth/peep:91749] awk 'NF==6 && $6=="cmove" {print}' sorted
158 28 100 30 swap cmove
61 4 36 21 r@ cmove
15 0 15 0 cells cmove
The numbers are dynamic execution counts. So we have no "SWAP MOVE"
and no "MIN MOVE" in these runs, but we have some "SWAP CMOVE". OTOH,
we also have ";S MOVE", "COUNT MOVE", "CELLS MOVE", "EXECUTE MOVE",
"CELLS CMOVE", which would probably require an additional SWAP if the
order was ( from count to -- ); for the cases where MOVE and CMOVE are preceded by stack juggling words, I guess one could use the other
order, and on average the code would have about the same amount of
stack juggling.
The existence of "COUNT MOVE" is particularly interesting: You have a
counted string in the TO buffer, and want to overwrite all its
characters by characters from the FROM string. Looking for a longer sequence, I see:
838 0 191 647 ;s var: c! var: count move
838 0 191 647 count move lit lit ! ;s
What is ;S MOVE ?
I looked in my gforth dir but could find no matches.
There does
seem to be a great number of R> MOVE which would favour length last
syntax.
On Friday, April 28, 2023 at 11:25:46 PM UTC-4, dxforth wrote:
> Before you go you might explain what is your rationale for using Forth over other languages? It can't be that you accept Moore's reasons for it, because clearly you don't.
I like it. I find it easier to use than languages that require the more complicated software and hardware. I barely remember how to load code into a target from a C compiler.
What are you looking for? You don't really seem interested in having much of a conversation. It's like you are always maneuvering to find an advantage point.
Is there something you wish to discuss?
dxforth <dxforth@gmail.com> writes:
What is ;S MOVE
I looked in my gforth dir but could find no matches.
;S is the primitive compiled by EXIT and ";". In the source code you
would see a MOVE preceded by a call to a colon definition or
DOES>-defined word.
There does
seem to be a great number of R> MOVE which would favour length last
syntax.
Looking at a longer sequence:
7825 7648 72 105 >r rot over 1+ r> move
All the dynamic invocations seem to be due to one occurrence in the source code:
: place ( addr len to -- ) \ gforth
over >r rot over 1+ r> move c! ;
Let's see how we can write that with SWAPMOVE:
: place ( addr len to -- ) \ gforth
2dup 2>r 1+ swapmove 2r> c! ;
And if it's ok to corrupt the destination string in case of an
overlap (not the case for the proposed PLACE):
: place ( addr len to -- ) \ gforth
2dup c! 1+ swapmove ;
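The PLACE rewrites above presuppose a SWAPMOVE word; it is not a standard word, but from the stack pictures a minimal definition might be:

```forth
\ SWAPMOVE: MOVE with the count before the destination,
\ i.e. ( from u to -- ) instead of MOVE's ( from to u -- ).
: swapmove ( c-addr1 u c-addr2 -- )
  swap move ;
```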
> So maybe Moore had this in mind when he designed CMOVE with the stack effect ( from to count -- ).
A frequency analysis would likely show SWAP MOVE outnumbered MIN MOVE.
On 29/04/2023 2:47 pm, Lorem Ipsum wrote:
> What are you looking for? You don't really seem interested in having much of a conversation. It's like you are always maneuvering to find an advantage point.
From what I've seen you're no stranger to asking probing questions. When it goes nowhere for you, you end it just as you did above.
> Is there something you wish to discuss?
'like' doesn't offer much scope for discussion, so no.
In article <u2ihpq$2rlsa$1@dont-email.me>, dxforth <dxforth@gmail.com> wrote:
> A frequency analysis would likely show SWAP MOVE outnumbered MIN MOVE.
Reality check. Of the 61 uses of MOVE in my forth library, only 2 'SWAP MOVE' are present.
> Of the 61 uses of MOVE in my forth library, only 2 'SWAP MOVE' are present.
So SWAP MOVE did outnumber MIN MOVE ?
dxforth <dxf...@gmail.com> writes: [..]
> A frequency analysis would likely show SWAP MOVE outnumbered MIN MOVE.
directories: examples / include / meta
Searched files: 6701 1182 55
-------------------------------------------------
Searching for: SWAP CMOVE 55 17 2
Searching for: ROT CMOVE 11 1 0
Searching for: MIN CMOVE 4 4 2
Searching for: R> CMOVE 4 0 0
Searching for: CELLS MOVE 83 7 1
Searching for: SWAP MOVE 48 20 6
Searching for: ROT MOVE 4 0 0
Searching for: R> MOVE 3 1 0
Searching for: MAX MOVE 2 2 0
On Saturday, April 29, 2023 at 12:19:25 PM UTC+2, Anton Ertl wrote:
> 158 28 100 30 swap cmove
> 61 4 36 21 r@ cmove
> 15 0 15 0 cells cmove
A stunning amount of occurrences...
Another consideration is how often an item is reused after an operation.
E.g. many times I need the target address after PLACE. It may be my way
of programming - and when I introduced PLACE I seriously considered it. Eventually choosing compatibility over usability.
On 3/05/2023 9:19 pm, Hans Bezemer wrote:
> Another consideration is how often an item is reused after an operation. E.g. many times I need the target address after PLACE.
Some forths had PACK. As my kernel used it three times vs. one for PLACE, I opted to provide both.
On Wednesday, May 3, 2023 at 7:19:55 AM UTC-4, Hans Bezemer wrote:
E.g. many times I need the target address after PLACE.
For situations like that, with a non-optimizing compiler, I have DUP>R
which is one instruction on my machine.
I see that SwiftForth has DUP>R as well.
Is DUP>R common in other systems?
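A sketch of the pattern Brian describes: keeping the target address across PLACE with DUP>R (the word and buffer names here are hypothetical; assumes PLACE ( c-addr u to -- ) and a DUP>R primitive):

```forth
create name$ 32 chars allot   \ hypothetical counted-string buffer

: store-and-show ( c-addr u -- )
  name$ dup>r place    \ DUP>R saves the target address on R
  r> count type ;      \ reuse it: print the freshly stored string
```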
On 4 May 2023 at 05:34:20 CEST, "dxforth" <dxforth@gmail.com> wrote:
> Some forths had PACK. As my kernel used it three times vs. one for PLACE, I opted to provide both.
What is PACK ?
> What is PACK ?
Same behaviour as PLACE but leaves the destination address.
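A minimal high-level sketch of PACK in terms of PLACE (the factoring is illustrative, not dxforth's kernel code):

```forth
\ PACK: like PLACE, but leave the destination address behind.
: pack ( c-addr u to -- to )
  dup >r place r> ;
```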
In article <u2vuep$1pcrl$1@dont-email.me>, dxforth <dxforth@gmail.com> wrote:
> Same behaviour as PLACE but leaves the destination address.
It is a mystery why those same people don't invent a ! that leaves the destination address.
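For what it's worth, the store-and-keep-address variant albert asks about is a one-liner (the name !KEEP is invented here for illustration):

```forth
\ !KEEP: store x at a-addr like ! , but keep the address.
: !keep ( x a-addr -- a-addr )
  tuck ! ;
```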
On 03/05/2023 20:19, Brian Fox wrote:
> Is DUP>R common in other systems?
Don't know about common, but 8th has had dup>r for quite a while.
I have RDROP ... mainly because it has the same run-time as UNNEST
which I already had. The cost was but a header.
In article <u309nq$1qree$1@dont-email.me>, dxforth <dxforth@gmail.com> wrote:
> I have RDROP ... mainly because it has the same run-time as UNNEST which I already had. The cost was but a header.
Suddenly I realise that RDROP has right to exist.
It is not an abbreviation of R> DROP . That is silly,
copy it first to the data stack, then drop.
So RDROP is an atomic action like >R and unlike DUP>R .
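In most systems RDROP is indeed a code primitive. A high-level equivalent exists only for traditional threaded Forths, where the return address of the word being executed sits on top of the return stack; a sketch under that assumption, not portable:

```forth
\ RDROP: discard the return-stack item beneath our own return
\ address. Valid only in a threaded Forth where a colon
\ definition's return address is the topmost return-stack item.
: rdrop ( R: x -- )
  r> r> drop >r ;
```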
albert@cherry.(none) (albert) writes:
> So RDROP is an atomic action like >R and unlike DUP>R .
What about R@? Do you think it has a right to exist, or should we rather introduce RDUP so that we can then do RDUP R>? DUP >R is the dual of RDUP R>, with the roles of the stacks swapped.
In article <u2vuep$1pcrl$1...@dont-email.me>, dxforth <dxf...@gmail.com> wrote:
> Same behaviour as PLACE but leaves the destination address.
> It is a mystery why those same people don't invent a ! that leaves the destination address.
Because those people rarely encounter a number that has to be appended to the previous number, like +PLACE.
Groetjes Albert
--
Don't praise the day before the evening. One swallow doesn't make spring. You must not say "hey" before you have crossed the bridge. Don't sell the hide of the bear until you shot it. Better one bird in the hand than ten in the air. First gain is a cat spinning. - the Wise from Antrim -