• #### Order of transformation of values

From James Harris@21:1/5 to All on Mon Nov 8 11:21:45 2021
Here's a postulation which doesn't need a reply but you might find it interesting.

I /suggest/ that in expressions as used in programming languages there
is a natural sequence in which values are transformed by the operators -
and that most languages don't follow it!

The sequence is:

1. Addresses
2. Bit patterns
3. Numbers
4. Booleans

As mentioned in a recent thread, the operators which work with addresses
have to be applied first. That's because, while addresses can always be
dereferenced to obtain what's at those addresses, it's not meaningful to
go from a value to where it is stored. (The value produced so far is, in
fact, often in a register or on the top of the stack; either way, its
location is meaningless.)

The controversial part of the above is where I've put bit patterns so
let me come back to those below.

Going to the other end of the list first, the boolean operators (AND,
OR, NOT etc) don't just consume but also produce booleans. Thus once one
gets to boolean values the natural choice for how to combine them will
only produce more booleans. Therefore booleans are at the end.

Working upwards, boolean values are typically produced by comparisons.
The equality comparisons (== and !=) will work with anything; however
the relative comparisons (<, <=, >=, >) don't make much sense on bit
patterns but they do apply well to numbers. That's why I've put numbers immediately above booleans.

That results, so far, in

* Addresses (manipulated by array lookup, field selection etc)
* Numbers (compared with <, etc; manipulated by +, etc)
* Booleans (manipulated by AND, OR, etc)

Where, though, do bit patterns (as processed by bitwise operators) fit in?

One could make a case for simply not allowing bit patterns to be
combined with numbers. For example,

a + b & c

would be prohibited for having + and & adjacent to each other, there
being no precedence between them.

There would then be two separate streams of transformation:

addresses --> numbers --> booleans
addresses --> bit patterns --> booleans

IOW numbers and bit patterns could be read from store and compared to
produce booleans but numbers and bit patterns could not directly be
combined with each other (except in shifts where the RH operand is a
number).

A second option is to blur the distinction between bit patterns and
numbers and to intermix operations. (The C approach?)

A third is to insert bit patterns between numbers and booleans:

addresses --> numbers --> bit patterns --> booleans

but it makes little sense to go from numbers (for which all comparisons
are meaningful) to bit patterns (for which only some comparisons are meaningful) and then to apply comparison operators to them.

A fourth and final option is

addresses --> bit patterns --> numbers --> booleans

In that, the values read from addresses would be regarded initially as
bit patterns. Then, once the bit patterns have been manipulated (if at
all) the results would be treated as numbers. Then the numbers could be
subject to the full range of comparison operations and booleans would be produced.

ATM I follow the latter (fourth) approach. My operators are therefore in
order (high to low)

1. Address manipulation (field selection, array lookup, etc)
2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)

As an example, in

if A * B & Mask > C

the bitwise operator would be evaluated first resulting in

if A * (B & Mask) > C

then the arithmetic operator to give

if (A * (B & Mask)) > C

I should say that there are precedences within each group. In the
Arithmetic group * is applied before +, for example. But the groups are
applied in the order stated; all bit-pattern operators are applied
before any arithmetic operators, for instance.

As a side benefit, AISI having the operators in groups makes it easier
for a programmer to remember the precedences.
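As a concrete sketch of how such grouped precedences drive a parser, here is a minimal precedence-climbing parser. It is purely illustrative: the tokenizer, the table, and the numbers are invented, not taken from any actual implementation; higher numbers bind tighter, and the groups mirror the order above (bit patterns, then arithmetic, then comparisons, then booleans).

```python
import re

# Invented precedence table mirroring the grouped scheme: bit-pattern
# operators highest, then arithmetic, comparisons, and boolean last.
PREC = {
    '&': 40,                # bit-pattern group (highest)
    '*': 32, '/': 32,       # arithmetic: multiplicative
    '+': 30, '-': 30,       # arithmetic: additive
    '>': 20, '<': 20,       # comparison group
    'and': 10, 'or': 8,     # boolean group (lowest)
}

def tokenize(src):
    # Identifiers or single-character operators; just enough for the demo.
    return re.findall(r'\w+|[&*/+<>-]', src)

def parse(tokens, min_prec=0):
    # Precedence climbing: read one operand, then fold in every operator
    # that binds at least as tightly as min_prec.
    lhs = tokens.pop(0)
    while tokens and PREC.get(tokens[0], -1) >= min_prec:
        op = tokens.pop(0)
        rhs = parse(tokens, PREC[op] + 1)   # +1 makes it left-associative
        lhs = f'({lhs} {op} {rhs})'
    return lhs

print(parse(tokenize('A * B & Mask > C')))
# -> ((A * (B & Mask)) > C), the grouping described above
```

Because the whole bit-pattern group sits above the whole arithmetic group, `B & Mask` is folded before `*`, and the comparison is applied last, exactly as in the worked example.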

As I say, no need to reply. The above is just my rationale for ordering precedences as I have them.

--
James Harris

--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)
• From Dmitry A. Kazakov@21:1/5 to James Harris on Mon Nov 8 13:29:36 2021
On 2021-11-08 12:21, James Harris wrote:

ATM I follow the latter (fourth) approach. My operators are therefore in order (high to low)

2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)

Makes little to no sense.

The precedence rules usually distinguish unary and dyadic operators,
because that is what the reader expects.

The unary operators almost always have a higher precedence. Which is why
your rules make no sense in the first place:

A + not B

is never

not (A + B)

and

A * -B

is never

- (A * B)

And

- not A

is never

not (-A)

should "not" really had lower precedence than "-".

"not" and "-" have same precedence.

[Mixing prefix and suffix unary operators is fun!]

The exception from the rule are dyadic meta-operators like member
extraction etc. They have the highest precedence (and are "meta" because
some operands are not expressions, normally).

not A.B

means

not (A.B)

While

A.not B

is illegal, because here "not B" is an expression.

Then "+" and "*" never have same precedence in any sane language.

The natural precedence used by sane languages:

1. Namespace, member extraction A.B[C] means "[]"("."(A,B),C)
2. Indexing
3. Unary operators, prefix are right-to-left, suffix are left-to-right
4.1. Exponentiation (**)
4.2. Multiplicative (*, /)
4.3. Additive (+, -)
4.4. Comparisons (<, >, =)
4.5. Lattice (and, or, xor)

4.1 would conflict with 3 for most readers with background in mathematics:

-A**B

So 4.1 is almost like 3.

If you wanted an assignment operator, it should have asymmetric precedence:

A + B := C + D

is expected to be

A + (B := (C + D))

Same is true for exponentiation operator. Most people would read

A**B**C

as

A**(B**C)

Splitting lattice operations into logical and bit-wise is controversial.
You would need two sets of and/or/not operators.

If you want to mix "and" with "or" as C allows for && and ||, then "and"
is an equivalent of "*" and "or" is of "+". Compare:

x * 0 = 0    x and 0 = 0
x + 0 = x    x or  0 = x

So you could put bit-wise "and" into multiplicative and "or" into
additive operators. But as I said it is rather controversial. I would
just forbid mixing arithmetic and lattice operators.
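The */and and +/or analogy can be checked mechanically over the two lattice values; a throwaway sketch, using Python's bitwise operators to stand in for the lattice ones:

```python
# For operands drawn from {0, 1}, "and" behaves like "*" and "or"
# behaves like "+": the identities below all hold.
for x in (0, 1):
    assert x * 0 == (x & 0) == 0    # x and 0 = 0, mirroring x * 0 = 0
    assert x + 0 == (x | 0) == x    # x or 0  = x, mirroring x + 0 = x
```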

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

• From Bart@21:1/5 to James Harris on Mon Nov 8 13:57:29 2021
On 08/11/2021 11:21, James Harris wrote:
Here's a postulation which doesn't need a reply but you might find it interesting.

I /suggest/ that in expressions as used in programming languages there
is a natural sequence in which values are transformed by the operators -
and that most languages don't follow it!

The sequence is:

2. Bit patterns
3. Numbers
4. Boolean
...

As I say, no need to reply. The above is just my rationale for ordering precedences as I have them.

Your precedences should depend on type?

A few problems there:

* When parsing code, you might not know the types of things until later.
This makes it harder to create the right shape of AST

* In dynamic languages, you probably won't know the types of things
until runtime. And then, the types for the /same/ expression may be
different each time it's executed

* Operators may also be applied between types not in your list, and
arbitrary user-types.

So I think that type should not play a part in this.

• From James Harris@21:1/5 to Bart on Mon Nov 8 14:17:52 2021
On 08/11/2021 13:57, Bart wrote:
On 08/11/2021 11:21, James Harris wrote:
Here's a postulation which doesn't need a reply but you might find it
interesting.

I /suggest/ that in expressions as used in programming languages there
is a natural sequence in which values are transformed by the operators
- and that most languages don't follow it!

The sequence is:

2. Bit patterns
3. Numbers
4. Boolean
...

As I say, no need to reply. The above is just my rationale for
ordering precedences as I have them.

Your precedences should depend on type?

No! That would be a nightmare - as you go on to suggest (now snipped).

Imagine that all the identifiers in the following have been declared as
uint 64, except E which is a record but E.F is a field which is also
uint 64.

A and B > C + D & E.F

That shows one operator from each group but they still work on uint 64s.

...

--
James Harris

• From James Harris@21:1/5 to Dmitry A. Kazakov on Mon Nov 8 15:39:49 2021
On 08/11/2021 12:29, Dmitry A. Kazakov wrote:
On 2021-11-08 12:21, James Harris wrote:

ATM I follow the latter (fourth) approach. My operators are therefore
in order (high to low)

2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)

Makes little to no sense.

;-)

The precedence rules usually distinguish unary and dyadic operators,
because that is what the reader expects.

The unary operators almost always have a higher precedence.

"Almost always"?!

Which is why
your rules make no sense in the first place:

A + not B

is never

not (A + B)

and

A * -B

is never

- (A * B)

And

- not A

is never

not (-A)

Those are strange examples. You say they'd never match but I wouldn't
expect them to.

should "not" really had lower precedence than "-".

Well, think of

not A > B
not A <= - F()

"not" and "-" have same precedence.

[Mixing prefix and suffix unary operators is fun!]

It can be. :-(

The exception from the rule are dyadic meta-operators like member
extraction etc. They have the highest precedence (and are "meta" because
some operands are not expressions, normally).

Exceptions to rules and special 'meta' forms? Has your account been
hijacked, Dmitry? I remember you calling Bart's parsing a "mess" for
similar.

not A.B

means

not (A.B)

While

A.not B

is illegal, because here "not B" is an expression.

Then "+" and "*" never have same precedence in any sane language.

The natural precedence used by sane languages:

1. Namespace, member extraction A.B[C]  means "[]"("."(A,B),C)
2. Indexing

Why split namespace and indexing when the operations can be mixed
freely? E.g.

X[i].field.subfield[j].datum

3. Unary operators, prefix are right-to-left, suffix are left-to-right

Are you saying you would put logical NOT in there because it's unary?

4.1. Exponentiation (**)
4.2. Multiplicative (*, /)
4.4. Comparisons (<, >, =)
4.5. Lattice (and, or, xor)

Most of those are standard. I have

^ exponentiation
*/ times, divide, remainder etc
... all the comparison operators with one precedence
... all the boolean operators with their own precedences

4.1 would conflict with 3 for most readers with background in mathematics:

-A**B

I would parse that 'correctly' as

- (A ^ B)

(using ^ for exponentiation) because ^ has higher precedence than prefix
unary minus. Your mathematicians should be happy. :-)

So 4.1 is almost like 3.

If you wanted assignment operator, it should have asymmetric precedence:

A + B := C + D

is expected to be

A + (B := (C + D))

Same is true for exponentiation operator. Most people would read

A**B**C

as

A**(B**C)

That's how I would parse it - right to left. I think that exponentiation
is the only operator in my entire table which is right-to-left.

Splitting lattice operations into logical and bit-wise is controversial.
You would need two sets of and/or/not operators.

That's no problem. I use symbols for bitwise and words for booleans. For consistency they both follow the same order: not, and, xor, or.

In order of application they would be:

Bitwise:
<< >> shifts
! bitnot
& bitand
% bitxor
# bitor

Then, later:

Boolean
not logical not
and logical and (shortcutting)
xor logical xor
or logical or (shortcutting)

If you want to mix "and" with "or" as C allows for && and ||, then "and"
is an equivalent of "*" and "or" is of "+". Compare:

x * 0 = 0    x and 0 = 0
x + 0 = x    x or  0 = x

So you could put bit-wise "and" into multiplicative and "or" into additive operators.

Sounds as though you and Bart are on the same page on taking a
structural approach to precedences rather than a semantic one.

But as I said it is rather controversial. I would
just forbid mixing arithmetic and lattice operators.

Fine. Though what would a programmer do if he had a genuine need to
'mix' them?

--
James Harris

• From Bart@21:1/5 to Dmitry A. Kazakov on Mon Nov 8 16:58:57 2021
On 08/11/2021 16:37, Dmitry A. Kazakov wrote:
On 2021-11-08 16:39, James Harris wrote:

(using ^ for exponentiation) because ^ has higher precedence than
prefix unary minus. Your mathematicians should be happy. :-)

Traditionally in mathematics circumflex (hat) means something like
"vector".

Unit-vectors IIRC. But that was applied over the variable.

Using it for exponentiation was one of C's many blunders.

C did make many blunders but this wasn't one of them. (It doesn't have
an exponentiation operator, so avoided making one more!)

Perhaps ^ was supposed to look like an up-arrow:

https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation

But I don't know where using it for power-of originated. I use ^ to mean pointer dereference, taken from Pascal. And ** for exponentiation,
probably taken from Fortran.

C uses ^ for bitwise XOR.

• From Dmitry A. Kazakov@21:1/5 to James Harris on Mon Nov 8 17:37:27 2021
On 2021-11-08 16:39, James Harris wrote:
On 08/11/2021 12:29, Dmitry A. Kazakov wrote:
On 2021-11-08 12:21, James Harris wrote:

ATM I follow the latter (fourth) approach. My operators are therefore
in order (high to low)

2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)

Makes little to no sense.

;-)

I am too! (:-))

The precedence rules usually distinguish unary and dyadic operators,
because that is what the reader expects.

The unary operators almost always have a higher precedence.

"Almost always"?!

Yes, with some exceptions.

Which is why your rules make no sense in the first place:

A + not B

is never

not (A + B)

and

A * -B

is never

- (A * B)

And

- not A

is never

not (-A)

Those are strange examples. You say they'd never match but I wouldn't
expect them to.

These are examples of why unary operations should have higher precedence.

should "not" really had lower precedence than "-".

Well, think of

not A > B
not A <= - F()

Even when not is bit-wise?

not A + B * C + D

And you have the problem of mixing unary operators having different
precedence:

- not A

E.g. in

- not A + B * C + D

It is inconsistent.

The exception from the rule are dyadic meta-operators like member
extraction etc. They have the highest precedence (and are "meta"
because some operands are not expressions, normally).

Exceptions to rules and special 'meta' forms? Has your account been
hijacked, Dmitry? I remember you calling Bart's parsing a "mess" for
similar.

Not similar, purely mathematically, it is sort of recursion into what
you treat as an expression to resolve. It must stop at some point. In
most languages it stops in the second argument of A.B. If you want a
dynamic member resolution you would use A."B" but still keep the rules the same.

The natural precedence used by sane languages:

1. Namespace, member extraction A.B[C]  means "[]"("."(A,B),C)
2. Indexing

Why split namespace and indexing when the operations can be mixed
freely? E.g.

X[i].field.subfield[j].datum

Compare:

field.subfield[j] ---> (field.subfield)[j]

with

field+subfield[j] ---> field + (subfield[j])

3. Unary operators, prefix are right-to-left, suffix are left-to-right

Are you saying you would put logical NOT in there because it's unary?

Right, see above.

4.1. Exponentiation (**)
4.2. Multiplicative (*, /)
4.4. Comparisons (<, >, =)
4.5. Lattice (and, or, xor)

Most of those are standard. I have

^    exponentiation
*/   times, divide, remainder etc
...  all the comparison operators with one precedence
...  all the boolean operators with their own precedences

4.1 would conflict with 3 for most readers with background in
mathematics:

-A**B

I would parse that 'correctly' as

- (A ^ B)

To me it is not obvious, why not

(-A) ^ B?

And the counter example is this:

A ^ -B

If ^ preceded -, then that should too become

-(A ^ B)

My choice would be to allow

A ^ -B ---> A ^ (-B)
-A ^ B ---> Syntax error, give me parenthesis.

So, unary always precedes dyadic, except the metas.

(using ^ for exponentiation) because ^ has higher precedence than prefix unary minus. Your mathematicians should be happy. :-)

Traditionally in mathematics circumflex (hat) means something like
"vector". Using it for exponentiation was one of so many C's blunders.

So 4.1 is almost like 3.

If you wanted assignment operator, it should have asymmetric precedence:

A + B := C + D

is expected to be

A + (B := (C + D))

Same is true for exponentiation operator. Most people would read

A**B**C

as

A**(B**C)

That's how I would parse it - right to left. I think that exponentiation
is the only operator in my entire table which is right-to-left.

You could have pipelining operators:

"Buddy" >> Wide_Space >> "Hello" >> Stream

Also all prefix operators are right-to-left. Consider this:

* not 0x000FF0

(machine address inverted and then accessed)

Do you really want it to mutate into:

not (*X)

?

Splitting lattice operations into logical and bit-wise is
controversial. You would need two sets of and/or/not operators.

That's no problem. I use symbols for bitwise and words for booleans. For consistency they both follow the same order: not, and, xor, or.

In order of application they would be:

Bitwise:
<< >>   shifts
!       bitnot
&       bitand
%       bitxor
#       bitor

Then, later:

Boolean
not     logical not
and     logical and  (shortcutting)
xor     logical xor
or      logical or   (shortcutting)

If you want to mix "and" with "or" as C allows for && and ||, then
"and" is an equivalent of "*" and "or" is of "+". Compare:

x * 0 = 0    x and 0 = 0
x + 0 = x    x or  0 = x

So you could put bit-wise "and" into multiplicative and "or" into additive operators.

Sounds as though you and Bart are on the same page on taking a
structural approach to precedences rather than a semantic one.

Yes, but for different reasons. In a higher level language syntax can
imply nothing about semantics. So, rather than provoking the reader to a
wrong conclusion, just define the syntax completely separately.

But as I said it is rather controversial. I would just forbid mixing
arithmetic and lattice operators.

Fine. Though what would a programmer do if he had a genuine need to
'mix' them?

Require disambiguation per parenthesis.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

• From Dmitry A. Kazakov@21:1/5 to Bart on Mon Nov 8 18:36:34 2021
On 2021-11-08 17:58, Bart wrote:
On 08/11/2021 16:37, Dmitry A. Kazakov wrote:
On 2021-11-08 16:39, James Harris wrote:

(using ^ for exponentiation) because ^ has higher precedence than
prefix unary minus. Your mathematicians should be happy. :-)

Traditionally in mathematics circumflex (hat) means something like
"vector".

Unit-vectors IIRC. But that was applied over the variable.

Right, you type X <backspace> ^ and get it nicely printed over X on a
dot matrix printer. (:-))

Perhaps ^ was supposed to look like an up-arrow:

https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation

Up-arrow makes sense as poor man's superscript.

But I don't know where using it for power-of originated. I use ^ to mean pointer dereference, taken from Pascal.

I prefer implicit dereference.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

• From James Harris@21:1/5 to Dmitry A. Kazakov on Mon Nov 8 21:12:29 2021
On 08/11/2021 17:36, Dmitry A. Kazakov wrote:
On 2021-11-08 17:58, Bart wrote:

...

But I don't know where using it for power-of originated. I use ^ to
mean pointer dereference, taken from Pascal.

I prefer implicit dereference.

If pointers are implicitly dereferenced what do you do when you want to
get the pointer's value?

--
James Harris

• From Dmitry A. Kazakov@21:1/5 to James Harris on Mon Nov 8 22:54:28 2021
On 2021-11-08 22:12, James Harris wrote:
On 08/11/2021 17:36, Dmitry A. Kazakov wrote:
On 2021-11-08 17:58, Bart wrote:

...

But I don't know where using it for power-of originated. I use ^ to
mean pointer dereference, taken from Pascal.

I prefer implicit dereference.

If pointers are implicitly dereferenced what do you do when you want to
get the pointer's value?

Nothing dramatic. When the target has the pointer type then that is the pointer's value. When the target has the target type then that is dereferencing.

The problems arise with type inference or bottom-up stuff you and Bart
promote. But I want none of these.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

• From James Harris@21:1/5 to Dmitry A. Kazakov on Mon Nov 8 22:55:05 2021
On 08/11/2021 16:37, Dmitry A. Kazakov wrote:
On 2021-11-08 16:39, James Harris wrote:
On 08/11/2021 12:29, Dmitry A. Kazakov wrote:
On 2021-11-08 12:21, James Harris wrote:

ATM I follow the latter (fourth) approach. My operators are
therefore in order (high to low)

2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)

...

Which is why your rules make no sense in the first place:

A + not B

is never

not (A + B)

and

A * -B

is never

- (A * B)

And

- not A

is never

not (-A)

Those are strange examples. You say they'd never match but I wouldn't
expect them to.

These are examples of why unary operations should have higher precedence.

AFAICS having higher precedence does not mean rewriting the expression
with the operators in a different order! Take your first one, A + not B.
I would parse that as

A + (not B)

FWIW I'd parse the others as

A * (- B)
- (not A)

should "not" really had lower precedence than "-".

Well, think of

not A > B
not A <= - F()

Even when not is bit-wise?

not A + B * C + D

For bitwise not I use ! so

! A + B * C + D

which as the bitwise operators have high precedence would parse as

(! A) + (B * C) + D

Boolean operators have low precedence so boolean not would parse as

not (A + (B * C) + D)

And you have the problem of mixing unary operators having different precedence:

- not A

E.g. in

- not A + B * C + D

It is inconsistent.

Where's the inconsistency? If you mean bitnot that would parse as

(- (! A)) + (B * C) + D

...

The natural precedence used by sane languages:

1. Namespace, member extraction A.B[C]  means "[]"("."(A,B),C)
2. Indexing

Why split namespace and indexing when the operations can be mixed
freely? E.g.

X[i].field.subfield[j].datum

Compare:

field.subfield[j]  --->  (field.subfield)[j]

with

field+subfield[j]  --->  field + (subfield[j])

Arithmetic + is neither namespace nor indexing so I don't see the point.

...

4.1. Exponentiation (**)
4.2. Multiplicative (*, /)
4.4. Comparisons (<, >, =)
4.5. Lattice (and, or, xor)

Most of those are standard. I have

^    exponentiation
*/   times, divide, remainder etc
...  all the comparison operators with one precedence
...  all the boolean operators with their own precedences

4.1 would conflict with 3 for most readers with background in
mathematics:

-A**B

I would parse that 'correctly' as

- (A ^ B)

To me it is not obvious, why not

(-A) ^ B?

Three reasons:

1. Because ^ has higher precedence than unary minus.

2. Because that's the ordering used in maths.

3. Because the sign will be lost if raised to an even power.

And the counter example is this:

A ^ -B

I would parse as

A ^ (- B)

If ^ preceded -, then that should too become

-(A ^ B)

As mentioned above, I don't see why you would rearrange the operators.

My choice would be to allow

A ^ -B  ---> A ^ (-B)
-A ^ B  ---> Syntax error, give me parenthesis.

I do the first. The second is - (A ^ B). Not sure why you would want
that to be a syntax error.

So, unary always precedes dyadic, except the metas.

Understood but 'metas' such as . ( [ * and & can be normal operators. No
need to parse them separately.

...

A**(B**C)

That's how I would parse it - right to left. I think that
exponentiation is the only operator in my entire table which is
right-to-left.

You could have pipelining operators:

"Buddy" >> Wide_Space >> "Hello" >> Stream

I don't know what they are but it doesn't matter. I can make any
operator right associative just by making its precedence an odd number.
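The odd-number trick can be sketched in a few lines (a hypothetical illustration with invented numbers, not James's actual code): an even precedence recurses one level tighter, giving left associativity, while an odd precedence recurses at the same level, letting the same operator be consumed again on the right.

```python
PREC = {'+': 30, '*': 32, '^': 35}   # '^' is odd => right-associative

def parse(tokens, min_prec=0):
    lhs = tokens.pop(0)
    while tokens and PREC.get(tokens[0], -1) >= min_prec:
        op = tokens.pop(0)
        p = PREC[op]
        # Even: recurse at p + 1 (left-associative). Odd: recurse at p
        # itself, so the operator groups to the right.
        rhs = parse(tokens, p + 1 if p % 2 == 0 else p)
        lhs = f'({lhs} {op} {rhs})'
    return lhs

print(parse('A ^ B ^ C'.split()))   # (A ^ (B ^ C))
print(parse('A + B + C'.split()))   # ((A + B) + C)
```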

Also all prefix operators are right-to-left. Consider this:

* not 0x000FF0

(machine address inverted and then accessed)

I don't know that L-R or R-L applies to prefix or postfix operators.

To bitwise invert that address and then dereference it I would use
something like

(! 16'000FF0')*

Do you really want it to mutate into:

not (*X)

?

Again you are rearranging operators! Maybe I'm too tired to think but I
don't recall that ever being a thing.

...

But as I said it is rather controversial. I would just forbid mixing
arithmetic and lattice operators.

Fine. Though what would a programmer do if he had a genuine need to
'mix' them?

Require disambiguation per parenthesis.

Do you mean so that

A + B and C - D

would become either

(A + B) and (C - D)

or

A + (B and C) - D

?

--
James Harris

• From Dmitry A. Kazakov@21:1/5 to James Harris on Tue Nov 9 09:51:22 2021
On 2021-11-08 23:55, James Harris wrote:
On 08/11/2021 16:37, Dmitry A. Kazakov wrote:
On 2021-11-08 16:39, James Harris wrote:
On 08/11/2021 12:29, Dmitry A. Kazakov wrote:
On 2021-11-08 12:21, James Harris wrote:

ATM I follow the latter (fourth) approach. My operators are
therefore in order (high to low)

2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)

...

Which is why your rules make no sense in the first place:

A + not B

is never

not (A + B)

and

A * -B

is never

- (A * B)

And

- not A

is never

not (-A)

Those are strange examples. You say they'd never match but I wouldn't
expect them to.

These are examples why unary operation should have higher precedence.

AFAICS having higher precedence does not mean rewriting the expression
with the operators in a different order! Take your first one, A + not B.
I would parse that as

A + (not B)

I see the source of confusion. You want to have the precedence on the
right side of "not" very low, but the one on the left side is very high.
That gives you:

A + not B + C ---> A + (not (B + C))

When *both* sides are low, the result is

A + not B + C ---> not ((A + B) + C)

So, when you said low precedence you meant only the right side of.

I prefer balanced precedences for unary operations (both very high):

A + not B + C ---> A + (not B) + C
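The balanced-versus-unbalanced distinction for prefix operators can be sketched by giving each prefix operator its own right-hand binding power (values invented for illustration): a high power reproduces the balanced reading, while a very low one reproduces the "not swallows everything to its right" reading.

```python
INFIX = {'+': (30, 31)}   # (left bp, right bp): left-associative '+'
PREFIX = {'not': 50}      # high right bp => balanced: binds one operand

def parse(tokens, min_bp=0):
    tok = tokens.pop(0)
    if tok in PREFIX:
        # Prefix operator: its right binding power decides how much of
        # the following expression it captures.
        lhs = f'({tok} {parse(tokens, PREFIX[tok])})'
    else:
        lhs = tok
    while tokens and tokens[0] in INFIX and INFIX[tokens[0]][0] >= min_bp:
        op = tokens.pop(0)
        rhs = parse(tokens, INFIX[op][1])
        lhs = f'({lhs} {op} {rhs})'
    return lhs

print(parse('A + not B + C'.split()))   # ((A + (not B)) + C)
```

With `PREFIX = {'not': 5}` the same input instead comes out as `(A + (not (B + C)))`, the low-precedence reading.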

The natural precedence used by sane languages:

1. Namespace, member extraction A.B[C]  means "[]"("."(A,B),C)
2. Indexing

Why split namespace and indexing when the operations can be mixed
freely? E.g.

X[i].field.subfield[j].datum

Compare:

field.subfield[j]  --->  (field.subfield)[j]

with

field+subfield[j]  --->  field + (subfield[j])

Arithmetic + is neither namespace nor indexing so I don't see the point.

The point is that "." has a higher precedence than [] and "+" has a
lower one. You cannot have them in the same class.

...

4.1. Exponentiation (**)
4.2. Multiplicative (*, /)
4.4. Comparisons (<, >, =)
4.5. Lattice (and, or, xor)

Most of those are standard. I have

^    exponentiation
*/   times, divide, remainder etc
...  all the comparison operators with one precedence
...  all the boolean operators with their own precedences

4.1 would conflict with 3 for most readers with background in
mathematics:

-A**B

I would parse that 'correctly' as

- (A ^ B)

To me it is not obvious, why not

(-A) ^ B?

Three reasons:

1. Because ^ has higher precedence than unary minus.

But sir, it has a lower precedence! (:-))

2. Because that's the ordering used in maths.

Well, in mathematics it would be

-Aᴮ

There is no confusion because B is in superscript.

3. Because the sign will be lost if raised to an even power.

B looks very uneven today. (:-))

My choice would be to allow

A ^ -B  ---> A ^ (-B)
-A ^ B  ---> Syntax error, give me parenthesis.

I do the first. The second is - (A ^ B). Not sure why you would want
that to be a syntax error.

Because with *balanced* precedences it would mean (-A)^B.
So, unary always precedes dyadic, except the metas.

Understood but 'metas' such as . ( [ * and & can be normal operators. No
need to parse them separately.

Sure.

A**(B**C)

That's how I would parse it - right to left. I think that
exponentiation is the only operator in my entire table which is
right-to-left.

You could have pipelining operators:

"Buddy" >> Wide_Space >> "Hello" >> Stream

I don't know what they are but it doesn't matter.

Just an example of another right-to-left operator.

I can make any
operator right associative just by making its precedence an odd number.

You could use unbalanced precedences, since you already have them. For

left precedence > right precedence

X op Y op Z ---> X op<<Y op Z ---> (X op Y) op Z

left precedence < right precedence

X op Y op Z ---> X op Y>>op Z ---> X op (Y op Z)
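In a Pratt-style parser this unbalanced scheme amounts to giving each operator a separate left and right binding power: the left power decides whether the operator is consumed at all, and the right power is handed down into its right operand. A sketch with invented numbers:

```python
# (left bp, right bp); when the right power is the lower one, the
# operator regroups to the right.
BP = {
    '+':  (30, 31),   # right power higher => left-associative
    '**': (41, 40),   # left power higher  => right-associative
    ':=': (32, 10),   # high left: grabs the nearest name;
                      # low right: swallows the expression after it
}

def parse(tokens, min_bp=0):
    lhs = tokens.pop(0)
    while tokens and tokens[0] in BP and BP[tokens[0]][0] >= min_bp:
        op = tokens.pop(0)
        rhs = parse(tokens, BP[op][1])
        lhs = f'({lhs} {op} {rhs})'
    return lhs

print(parse('A ** B ** C'.split()))      # (A ** (B ** C))
print(parse('A + B := C + D'.split()))   # (A + (B := (C + D)))
```

Note that the invented numbers for ":=" give exactly the asymmetric grouping discussed earlier in the thread: A + B := C + D becomes A + (B := (C + D)).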

Fine. Though what would a programmer do if he had a genuine need to
'mix' them?

Require disambiguation per parenthesis.

Do you mean so that

A + B and C - D

would become either

(A + B) and (C - D)

or

A + (B and C) - D

I mean flagging any confusing syntax as illegal and requiring the
programmer to clarify things using parentheses.

A + B and C - D

(A + B) and (C - D)

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

• From James Harris@21:1/5 to Dmitry A. Kazakov on Tue Nov 9 11:02:12 2021
On 08/11/2021 21:54, Dmitry A. Kazakov wrote:
On 2021-11-08 22:12, James Harris wrote:
On 08/11/2021 17:36, Dmitry A. Kazakov wrote:
On 2021-11-08 17:58, Bart wrote:

...

But I don't know where using it for power-of originated. I use ^ to
mean pointer dereference, taken from Pascal.

I prefer implicit dereference.

If pointers are implicitly dereferenced what do you do when you want
to get the pointer's value?

Nothing dramatic. When the target has the pointer type then that is the pointer's value. When the target has the target type then that is dereferencing.

The problems arise with type inference or bottom-up stuff you and Bart promote. But I want none of these.

I don't follow. If n is a reference or pointer to a node then I presume
you want to refer to the node as

n

but what if you want the value of n, e.g. to print it, rather than the
value of the node it points at?

I ask because I have thought of allowing a programmer to make some
pointers auto dereferencing but I recognise that in some situations a programmer might want to access the value of the pointer rather than the
value of what it points at.

--
James Harris

• From Dmitry A. Kazakov@21:1/5 to James Harris on Tue Nov 9 12:44:28 2021
On 2021-11-09 12:02, James Harris wrote:
On 08/11/2021 21:54, Dmitry A. Kazakov wrote:
On 2021-11-08 22:12, James Harris wrote:
On 08/11/2021 17:36, Dmitry A. Kazakov wrote:
On 2021-11-08 17:58, Bart wrote:

...

But I don't know where using it for power-of originated. I use ^ to
mean pointer dereference, taken from Pascal.

I prefer implicit dereference.

If pointers are implicitly dereferenced what do you do when you want
to get the pointer's value?

Nothing dramatic. When the target has the pointer type then that is
the pointer's value. When the target has the target type then that is
dereferencing.

The problems arise with type inference or bottom-up stuff you and Bart
promote. But I want none of these.

I don't follow. If n is a reference or pointer to a node then I presume
you want to refer to the node as

n

but what if you want the value of n, e.g. to print it, rather than the
value of the node it points at?

You mean have two overloaded functions:

function Image (X : Node) return String;
function Image (X : Node_Ptr) return String;

Both would apply to n (of the type Node_Ptr). Right?

In that case you would have to disambiguate

Image (n)

E.g. in Ada you would use a qualified expression:

Image (Node_Ptr'(n)) -- Pointer to string
Image (Node'(n)) -- Node to string

or use a fully qualified name of Image if they are declared in different
modules, or whatever other method of resolving conflicting overloads is
available.

Furthermore you can decide that

function Image (X : Node) return String;

*overrides*

function Image (X : Node_Ptr) return String;

(remember that pesky OO?) and therefore

Image (n)

is unambiguous and means

Image (Node'(n))

Then if the programmer instead wanted the overridden meaning, he would
have to qualify it:

Image (Node_Ptr'(n))
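Since Python has no Ada-style overloading or qualified expressions, here is a loose model of my own of that scheme (Node, NodePtr, image and the qualify parameter are all invented names): the Node overload "overrides" the pointer one, so an unqualified call implicitly dereferences, and an explicit qualification recovers the pointer reading.

```python
# Hypothetical model of the override-plus-qualification scheme.

class Node:
    def __init__(self, label):
        self.label = label

class NodePtr:
    def __init__(self, target):
        self.target = target            # implicit dereference follows this

def image_node(x):                      # "Image (X : Node)"
    return f"Node({x.label})"

def image_ptr(p):                       # "Image (X : Node_Ptr)"
    return f"Ptr->{image_node(p.target)}"

def image(arg, qualify=None):
    """Unqualified: the Node overload wins, so a pointer is implicitly
    dereferenced.  Qualifying with NodePtr restores the pointer overload."""
    if isinstance(arg, NodePtr) and qualify is not NodePtr:
        return image_node(arg.target)   # like Image (Node'(n))
    if isinstance(arg, NodePtr):
        return image_ptr(arg)           # like Image (Node_Ptr'(n))
    return image_node(arg)

n = NodePtr(Node("leaf"))
print(image(n))                   # Node(leaf)  -- implicit dereference
print(image(n, qualify=NodePtr))  # Ptr->Node(leaf)
```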

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

• From James Harris@21:1/5 to Dmitry A. Kazakov on Wed Nov 10 08:33:01 2021
On 09/11/2021 08:51, Dmitry A. Kazakov wrote:
On 2021-11-08 23:55, James Harris wrote:
On 08/11/2021 16:37, Dmitry A. Kazakov wrote:
On 2021-11-08 16:39, James Harris wrote:
On 08/11/2021 12:29, Dmitry A. Kazakov wrote:

...

The natural precedence used by sane languages:

1. Namespace, member extraction A.B[C]  means "[]"("."(A,B),C)
2. Indexing

Why split namespace and indexing when the operations can be mixed
freely? E.g.

X[i].field.subfield[j].datum

Compare:

field.subfield[j]  --->  (field.subfield)[j]

with

field+subfield[j]  --->  field + (subfield[j])

Arithmetic + is neither namespace nor indexing so I don't see the point.

The point is that "." has a higher precedence than [] and "+" has a
lower one. You cannot have them in the same class.

In your example both . and [] have higher precedence than + so + is
irrelevant to whether . and [] should be in the same or different
precedence levels. Further, both . and [] are postfix. Therefore you
could have a chain of them such as

<OP> X.field1[i].field2[j].field3[k]

and that chain will extend for as long as the precedence of . or [] is
higher than the precedence of the operator to the left of variable X, which
I've called <OP>. Now, if . and [] have higher precedence than all
others which could appear to the left of the variable name (which in
your scheme they do) then however long the chain is all of it will be
applied before <OP>. Therefore I put it to you that they do not have to
have separate priority levels.

It's easiest to see with a priority diagram. Digits indicate precedences
of operators and v is a variable. If you have

5v787878

then all of the 7s and 8s will be applied (one at a time, and left to
right because they are trailing operators) before the 5.
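The 5v787878 diagram can be sketched directly. The following is my own minimal illustration (not James's parser): a prefix operator at precedence 5 calls the expression parser, and the postfix . and [] chain, both at level 7, keeps extending as long as 7 beats 5.

```python
# Minimal sketch: a postfix chain of . and [] (same precedence, applied
# left to right) all binds before a lower-precedence prefix operator.

POSTFIX = {".": 7, "[": 7}      # same level, left-to-right

def parse_primary(toks):
    return toks.pop(0)          # a bare name

def parse_expr(toks, min_prec=0):
    if toks[0] == "not":        # prefix op; precedence 5 on its right
        toks.pop(0)
        return ("not", parse_expr(toks, 5))
    node = parse_primary(toks)
    while toks and toks[0] in POSTFIX and POSTFIX[toks[0]] > min_prec:
        op = toks.pop(0)
        if op == ".":
            node = (".", node, toks.pop(0))
        else:                   # "["
            index = parse_expr(toks)
            assert toks.pop(0) == "]"
            node = ("[]", node, index)
    return node

tree = parse_expr("not X . f [ i ] . g".split())
print(tree)   # ('not', ('.', ('[]', ('.', 'X', 'f'), 'i'), 'g'))
```

The whole chain X.f[i].g is consumed before "not" applies, exactly because 7 > 5; . and [] never needed distinct levels for that.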

Am not ignoring other points in your post but will have to come back to
them.

--
James Harris

• From Dmitry A. Kazakov@21:1/5 to James Harris on Wed Nov 10 10:59:01 2021
On 2021-11-10 09:33, James Harris wrote:
On 09/11/2021 08:51, Dmitry A. Kazakov wrote:
On 2021-11-08 23:55, James Harris wrote:
On 08/11/2021 16:37, Dmitry A. Kazakov wrote:
On 2021-11-08 16:39, James Harris wrote:
On 08/11/2021 12:29, Dmitry A. Kazakov wrote:

...

The natural precedence used by sane languages:

1. Namespace, member extraction A.B[C]  means "[]"("."(A,B),C)
2. Indexing

Why split namespace and indexing when the operations can be mixed
freely? E.g.

X[i].field.subfield[j].datum

Compare:

field.subfield[j]  --->  (field.subfield)[j]

with

field+subfield[j]  --->  field + (subfield[j])

Arithmetic + is neither namespace nor indexing so I don't see the point.

The point is that "." has a higher precedence than [] and "+" has a
lower one. You cannot have them in the same class.

In your example both . and [] have higher precedence than + so + is irrelevant to whether . and [] should be in the same or different
precedence levels. Further, both . and [] are postfix.

? "." is obviously dyadic. [] is not an operator.

There is no such thing as "same" precedence, in the end you must always
decide which one takes over:

A.B[C] "." takes precedence over []
A+B[C] [] takes precedence over +

On second thought, you could argue that . and [] have the same "base precedence" combined with the left-to-right rule, as in the case of "+"
and "-". That should work, I guess.

Therefore you
could have a chain of them such as

<OP> X.field1[i].field2[j].field3[k]

and that chain will extend for as long as the precedence of . or [] is
higher than the precedence of the operator to the left of variable X, which
I've called <OP>. Now, if . and [] have higher precedence than all
others which could appear to the left of the variable name (which in
your scheme they do) then however long the chain is all of it will be
applied before <OP>.

Irrelevant to the issue whether

X.field1[i].field2[j].field3[k]

means

(((X.field1)[i].field2)[j].field3)[k]

or

((X.(field1[i])).(field2[j])).(field3[k])

or something else.

But OK, I think one could put them together as long as left-to-right holds.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

• From James Harris@21:1/5 to Dmitry A. Kazakov on Wed Nov 10 11:33:48 2021
On 10/11/2021 09:59, Dmitry A. Kazakov wrote:

...

? "." is obviously dyadic. [] is not an operator.

Agreed, though [ can be parsed in part as a dyadic operator.

There is no such thing as "same" precedence, in the end you must always decide which one takes over:

A.B[C]   "." takes precedence over []
A+B[C]   [] takes precedence over +

On second thought, you could argue that . and [] have the same "base precedence" combined with the left-to-right rule, as in the case of "+"
and "-". That should work, I guess.

Yes. I would add that the tie has to be broken by an associativity rule
even in the case when a certain 'operator' can be repeated as in either of

A.B.C
A[B][C]

...

But OK, I think one could put them together as long as left-to-right holds.

Agreed.

--
James Harris

• From James Harris@21:1/5 to Dmitry A. Kazakov on Wed Nov 10 11:16:53 2021
XPost: sljfsdjfl

On 09/11/2021 08:51, Dmitry A. Kazakov wrote:
On 2021-11-08 23:55, James Harris wrote:
On 08/11/2021 16:37, Dmitry A. Kazakov wrote:
On 2021-11-08 16:39, James Harris wrote:
On 08/11/2021 12:29, Dmitry A. Kazakov wrote:

...

These are examples why unary operation should have higher precedence.

AFAICS having higher precedence does not mean rewriting the expression
with the operators in a different order! Take your first one, A + not
B. I would parse that as

A + (not B)

I see the source of confusion.

You say that to raise my hopes only to dash them in the next sentence! :-(

You want to have the precedence on the
right side of "not" very low, but the one on the left side is very high.

No, not at all!

I am finding this a fascinating subthread because I have been completely baffled as to what you have in mind and I suspect you are thinking the
same about me. But there has to be a way through.

On precedences I did, many years ago, look at parsing an expression by
giving operators different precedences on left and right (there is at
least one parsing algorithm which espouses that approach) but I chose
not to. All of my operators have just one precedence.

That still leaves the confusion. However, ...

That gives you:

A + not B + C  --->  A + (not (B + C))

When *both* sides are low, the result is

A + not B + C  --->  not ((A + B) + C)

I think I may have realised where the confusion is coming from. Some key points:

Are you aware that expressions (as we typically use them) have two
contexts? One might call them prefix and postfix.

prefix - pre a subexpression, looking for a value
postfix - post a subexpression, looking for an operator

A valid subexpression needs

one prefix phase
zero or more postfix phases

E.g. the subexpression

X

has the required prefix phase (ending in a value) and zero postfix phases.

As for the /extent/ of a subexpression, in simple terms a subexpression
ends when we get to an operator of lower precedence than the context in
which the subexpression began. For example, in

A / B ** C + D

the internal subexpression B ** C ends after C because + has lower
precedence than /.

In fact, with left-associative operators it is more accurate to say that
a subexpression ends when we encounter an operator whose precedence is
less than or equal to the precedence with which the subexpression began,
and it is convenient to say that the end of the entire expression has precedence 0 so that is also automatically recognised as the end of any subexpression.

Finally, operators which appear in the prefix phase cannot end an
expression because they do not satisfy the requirement that the prefix
phase has to result in a value. But they can call the expression parser
to resolve what follows then apply their operations to the result and
thus end that phase (and potentially the entire expression) with a value.

Still with me? If it seems strange please reread. This is all standard expression parsing as may be used in Ada, Basic, C, etc.

To avoid making this post any longer maybe I should stop here. Can you
see how the above allows your example expression to be parsed
irrespective of whether 'not' has higher or lower precedence than the
operators which surround it?
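To make the prefix/postfix phase description concrete, here is a sketch of my own (hedged; not James's actual code) of a parser where each operator has a single precedence and the prefix operator simply re-enters the expression parser with its own precedence. The same grammar handles "not" whether its precedence is high or low:

```python
# Sketch: one-precedence-per-operator parsing with a prefix phase
# (must yield a value) and a postfix phase (zero or more infix ops).

def make_parser(not_prec):
    BIN = {"+": 4}

    def expr(toks, min_prec=0):
        # prefix phase: must end with a value
        if toks and toks[0] == "not":
            toks.pop(0)
            left = ("not", expr(toks, not_prec))
        else:
            left = toks.pop(0)
        # postfix phase: zero or more infix operators
        while toks and toks[0] in BIN and BIN[toks[0]] > min_prec:
            op = toks.pop(0)
            left = (op, left, expr(toks, BIN[op]))
        return left

    return lambda s: expr(s.split())

high = make_parser(not_prec=9)   # not binds tighter than +
low = make_parser(not_prec=1)    # not binds looser than +

print(high("A + not B + C"))  # ('+', ('+', 'A', ('not', 'B')), 'C')
print(low("A + not B + C"))   # ('+', 'A', ('not', ('+', 'B', 'C')))
```

Note this models only the single precedence James describes; it does not reproduce Dmitry's separate left-side/right-side precedences, under which not ((A + B) + C) would also be reachable.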

...

-A**B

I would parse that 'correctly' as

- (A ^ B)

To me it is not obvious, why not

(-A) ^ B?

Three reasons:

1. Because ^ has higher precedence than unary minus.

But sir, it has a lower precedence! (:-))

^^^^^
I've found the unbalanced parentheses you keep talking about! ;-)

I have exponentiation as having higher precedence than unary minus.
Parsed as per the comments above.

2. Because that's the ordering used in maths.

Well, in mathematics it would be

-Aᵇ

There is no confusion because B is in superscript.

Yes, so AISI -A ^ B should be -(A ^ B) as in maths.

3. Because the sign will be lost if raised to an even power.

B looks very uneven today. (:-))

Some days are like that! But B is often a constant (and often 2).
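For what it's worth, Python itself follows the convention argued for here (** binds tighter than unary minus), which makes reasons 1 and 3 easy to demonstrate:

```python
# Python parses -A**B as -(A**B), preserving the sign even for an even
# exponent; the other reading needs explicit parentheses.

assert -2 ** 2 == -4        # parsed as -(2 ** 2)
assert (-2) ** 2 == 4       # parentheses select the other reading

# with the (-A)**B reading and an even B, the sign would be lost:
assert -(3 ** 2) == -9
assert (-3) ** 2 == 9
```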

To cut down the post a bit I've snipped a lot of examples, but feel free
to come back to them.

--
James Harris

• From James Harris@21:1/5 to All on Wed Nov 10 12:27:09 2021
On 10/11/2021 11:16, James Harris wrote:

to' so that I cannot send the message by mistake (some too-easy-to-press-by-mistake key combinations of my newsreader do that). However, in this case the message went even with the rubbish string
(sljfsdjfl) in place. I don't know why, or where that message would have
gone other than to comp.lang.misc but it's probably nowhere useful!

--
James Harris

• From James Harris@21:1/5 to Dmitry A. Kazakov on Sat Feb 12 18:10:00 2022
On 09/11/2021 11:44, Dmitry A. Kazakov wrote:
On 2021-11-09 12:02, James Harris wrote:
On 08/11/2021 21:54, Dmitry A. Kazakov wrote:
On 2021-11-08 22:12, James Harris wrote:
On 08/11/2021 17:36, Dmitry A. Kazakov wrote:
On 2021-11-08 17:58, Bart wrote:

...

But I don't know where using it for power-of originated. I use ^
to mean pointer dereference, taken from Pascal.

I prefer implicit dereference.

If pointers are implicitly dereferenced what do you do when you want
to get the pointer's value?

Nothing dramatic. When the target has the pointer type then that is
the pointer's value. When the target has the target type then that is
dereferencing.

The problems arise with type inference or bottom-up stuff you and
Bart promote. But I want none of these.

I don't follow. If n is a reference or pointer to a node then I
presume you want to refer to the node as

n

but what if you want the value of n, e.g. to print it, rather than the
value of the node it points at?

That's not what I was thinking but it's an interesting idea.

--
James Harris

• From James Harris@21:1/5 to James Harris on Sat Feb 12 18:35:53 2022
On 12/02/2022 18:10, James Harris wrote:
On 09/11/2021 11:44, Dmitry A. Kazakov wrote:
On 2021-11-09 12:02, James Harris wrote:
On 08/11/2021 21:54, Dmitry A. Kazakov wrote:
On 2021-11-08 22:12, James Harris wrote:
On 08/11/2021 17:36, Dmitry A. Kazakov wrote:
On 2021-11-08 17:58, Bart wrote:

...

But I don't know where using it for power-of originated. I use ^ >>>>>>> to mean pointer dereference, taken from Pascal.

I prefer implicit dereference.

If pointers are implicitly dereferenced what do you do when you
want to get the pointer's value?

Nothing dramatic. When the target has the pointer type then that is
the pointer's value. When the target has the target type then that
is dereferencing.

The problems arise with type inference or bottom-up stuff you and
Bart promote. But I want none of these.

I don't follow. If n is a reference or pointer to a node then I
presume you want to refer to the node as

n

but what if you want the value of n, e.g. to print it, rather than
the value of the node it points at?

That's not what I was thinking but it's an interesting idea.

Oops, I cut off the part I meant to reply to. It was Dmitry's suggestion
of overloaded functions, one which accessed the pointer itself and
another which followed the pointer to access its referent.

--
James Harris

• From Alexei A. Frounze@21:1/5 to James Harris on Sun Feb 13 14:22:36 2022
On Monday, November 8, 2021 at 3:21:47 AM UTC-8, James Harris wrote:
[Again, joining late, haven't read all of the conversation.]
...
ATM I follow the latter (fourth) approach. My operators are therefore in order (high to low)

2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)
...

If I were to redo C's "operator precedence", which is flawed for historical reasons (B and/or BCPL to Blame, handy!), I'd reduce the total number of
the levels in the binary operators (fewer to remember):
1. *, /, %, <<, >>, &
(* and << kinda multiply, / and >> kinda divide, % and & kinda compute modulo, * and & also kinda multiply)
2. +, -, |, ^
(+, | and ^ kinda add)
3. ==, !=, <=, <, >, >=
(no good reason to separate these)
4. &&
5. ||
6. ?:
I'd likely make =, +=, ++ and such non-expressions and drop the comma
operator altogether.
I'd probably rework ?: as well (make it left-associative), not entirely sure.
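Alexei's table can be written down as data and tried out; this is my own sketch (it leaves ?: out, since the sketch only handles binary infix operators), showing that the earlier thread example a + b & c is legal and unambiguous under his scheme, with & binding like *:

```python
# Sketch of the proposed precedence table as data, plus a tiny
# precedence-climbing parse over it.

PREC = {}
for level, ops in enumerate(
        [["*", "/", "%", "<<", ">>", "&"],
         ["+", "-", "|", "^"],
         ["==", "!=", "<=", "<", ">", ">="],
         ["&&"],
         ["||"]],
        start=1):
    for op in ops:
        PREC[op] = 10 - level    # higher number = binds tighter

def parse(toks, min_prec=0):
    left = toks.pop(0)
    # strict > gives left associativity for equal-precedence operators
    while toks and PREC.get(toks[0], -1) > min_prec:
        op = toks.pop(0)
        left = (op, left, parse(toks, PREC[op]))
    return left

print(parse("a + b & c".split()))  # ('+', 'a', ('&', 'b', 'c'))
```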

Alex

• From Bart@21:1/5 to Alexei A. Frounze on Mon Feb 14 01:05:50 2022
On 13/02/2022 22:22, Alexei A. Frounze wrote:
On Monday, November 8, 2021 at 3:21:47 AM UTC-8, James Harris wrote:
[Again, joining late, haven't read all of the conversation.]
...
ATM I follow the latter (fourth) approach. My operators are therefore in
order (high to low)

2. Bit-pattern manipulation (&, !, etc)
3. Arithmetic manipulation (+, *, etc)
4. Comparisons (<, !=, etc)
5. Logical manipulation (AND, NOT, etc)
...

If I were to redo C's "operator precedence", which is flawed for historical reasons (B and/or BCPL to Blame, handy!), I'd reduce the total number of
the levels in the binary operators (fewer to remember):
1. *, /, %, <<, >>, &
(* and << kinda multiply, / and >> kinda divide, % and & kinda compute modulo,
* and & also kinda multiply)
2. +, -, |, ^
(+, | and ^ kinda add)
3. ==, !=, <=, <, >, >=
(no good reason to separate these)
4. &&
5. ||
6. ?:
I'd likely make =, +=, ++ and such non-expressions and drop the comma operator altogether.
I'd probably rework ?: as well (make it left-associative), not entirely sure.

This is not far off the groupings I use. For the operators listed (and
using those same names), they are:

1 * / % << >> (shifts scale the value)

2 + - & | ^ (all bitwise ops are the same; they don't scale, so
they belong neither above or below, so why not here)

3 == != <= < > >=

4 &&

5 ||

For ?:, I have that as syntax, not an operator, and it requires
parentheses: ( a ? b : c), so that precedence is never an issue.

But it is very easy to improve on C.

3. ==, !=, <=, <, >, >=
(no good reason to separate these)

People do come up with rationales for C having them in two groups (and
for having & | ^ at different levels); apparently every bad decision in C
has a benefit if you look hard enough; everything is a feature!

• From Alexei A. Frounze@21:1/5 to Bart on Sun Feb 13 18:01:48 2022
On Sunday, February 13, 2022 at 5:05:51 PM UTC-8, Bart wrote:
On 13/02/2022 22:22, Alexei A. Frounze wrote:
If I were to redo C's "operator precedence", which is flawed for historical reasons (B and/or BCPL to Blame, handy!), I'd reduce the total number of the levels in the binary operators (fewer to remember):
1. *, /, %, <<, >>, &
(* and << kinda multiply, / and >> kinda divide, % and & kinda compute modulo,
* and & also kinda multiply)
2. +, -, |, ^
(+, | and ^ kinda add)
3. ==, !=, <=, <, >, >=
(no good reason to separate these)
4. &&
5. ||
6. ?:
I'd likely make =, +=, ++ and such non-expressions and drop the comma operator altogether.
I'd probably rework ?: as well (make it left-associative), not entirely sure.
This is not far off the groupings I use. For the operators listed (and
using those same names), they are:

1 * / % << >> (shifts scale the value)

2 + - & | ^ (all bitwise ops are the same; they don't scale, so
they belong neither above or below, so why not here)

If I view bitwise operator operands as just collections of individual/independent bits, then separating & from | (and ^) lets me use the
same precedence as often used in computer literature (& and |
can actually be spelled as "AND" and "OR" or some other symbol).
And there for individual bits they use & just like everybody normally
uses * and they use | just like everybody normally uses +.
The rules for * and + are very similar to those for & and |.
0 * 0 = 0 = 0 & 0
0 * 1 = 0 = 0 & 1
1 * 0 = 0 = 1 & 0
1 * 1 = 1 = 1 & 1
0 + 0 = 0 = 0 | 0
0 + 1 = 1 = 0 | 1
1 + 0 = 1 = 1 | 0
1 + 1 = non-zero = 1 | 1
a * b = b * a
a + b = b + a
a & b = b & a
a | b = b | a
(a * b) * c = a * (b * c)
(a + b) + c = a + (b + c)
(a & b) & c = a & (b & c)
(a | b) | c = a | (b | c)
(a + b) * c = (a * c) + (b * c)
(a | b) & c = (a & c) | (b & c)
So, it looks pretty natural to me to treat & and | similar to * and +.
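The identities listed above check out mechanically; here is a quick brute-force verification of my own over single bits (+ and | are compared via truthiness, since 1 + 1 = 2):

```python
# Verify the & ~ * and | ~ + analogies over all single-bit inputs.
from itertools import product

for a, b in product((0, 1), repeat=2):
    assert (a & b) == a * b                 # & behaves like *
    assert (a | b) == (1 if a + b else 0)   # | behaves like + (truthiness)
    assert (a & b) == (b & a)               # commutativity
    assert (a | b) == (b | a)

for a, b, c in product((0, 1), repeat=3):
    assert ((a & b) & c) == (a & (b & c))   # associativity
    assert ((a | b) | c) == (a | (b | c))
    assert ((a | b) & c) == ((a & c) | (b & c))  # & distributes over |

print("all identities hold")
```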

Alex

• From Bart@21:1/5 to Alexei A. Frounze on Mon Feb 14 19:16:40 2022
On 14/02/2022 02:01, Alexei A. Frounze wrote:
On Sunday, February 13, 2022 at 5:05:51 PM UTC-8, Bart wrote:
On 13/02/2022 22:22, Alexei A. Frounze wrote:
If I were to redo C's "operator precedence", which is flawed for historical >>> reasons (B and/or BCPL to Blame, handy!), I'd reduce the total number of >>> the levels in the binary operators (fewer to remember):
1. *, /, %, <<, >>, &
(* and << kinda multiply, / and >> kinda divide, % and & kinda compute modulo,
* and & also kinda multiply)
2. +, -, |, ^
(+, | and ^ kinda add)
3. ==, !=, <=, <, >, >=
(no good reason to separate these)
4. &&
5. ||
6. ?:
I'd likely make =, +=, ++ and such non-expressions and drop the comma
operator altogether.
I'd probably rework ?: as well (make it left-associative), not entirely sure.
This is not far off the groupings I use. For the operators listed (and
using those same names), they are:

1 * / % << >> (shifts scale the value)

2 + - & | ^ (all bitwise ops are the same; they don't scale, so
they belong neither above or below, so why not here)

If I view bitwise operator operands as just collections of individual/independent bits, then separating & from | (and ^) lets me use the
same precedence as often used in computer literature (& and |
can actually be spelled as "AND" and "OR" or some other symbol).
And there for individual bits they use & just like everybody normally
uses * and they use | just like everybody normally uses +.
The rules for * and + are very similar to those for & and |.
0 * 0 = 0 = 0 & 0
0 * 1 = 0 = 0 & 1
1 * 0 = 0 = 1 & 0
1 * 1 = 1 = 1 & 1
0 + 0 = 0 = 0 | 0
0 + 1 = 1 = 0 | 1
1 + 0 = 1 = 1 | 0
1 + 1 = non-zero = 1 | 1
a * b = b * a
a + b = b + a
a & b = b & a
a | b = b | a
(a * b) * c = a * (b * c)
(a + b) + c = a + (b + c)
(a & b) & c = a & (b & c)
(a | b) | c = a | (b | c)
(a + b) * c = (a * c) + (b * c)
(a | b) & c = (a & c) | (b & c)
So, it looks pretty natural to me to treat & and | similar to * and +.

I think of it in ALU terms. If you have an N-bit ALU, and give it inputs
A and B, with output C, then for bitwise operations, each output bit
C[i] corresponds only to A[i] & B[i] etc, independently of all the other
bits.

There is no shifting or scaling.

With add and subtract, these would also apply, so that A[i]+B[i] gives a
result C[i] which is either 0 or 1; but there is now also a carry or
borrow bit that can propagate through the result. I don't consider that shifting (except that doing A+A is equivalent to A << 1).

In any case, they are close enough to be treated as the same precedence
(also I can't think of a reason for & | ^ to be higher or lower).

Multiplication and division can be implemented with adds, subtracts and
shifts, while << and >> are pure shifts. So they clearly belong in the
same group. (A<<B is also just A*(2**B))
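Both closing identities are easy to confirm; a quick check of my own:

```python
# A << B is multiplication by a power of two, which is why << sits with *
# in the grouping above; and A + A is the one "shift" addition can do.

for a in range(16):
    for b in range(8):
        assert a << b == a * (2 ** b)

assert all(a + a == a << 1 for a in range(16))
print("shift identities hold")
```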
