• What is the meaning of an expression?

    From Roger L Costello@21:1/5 to All on Fri Jan 14 12:15:06 2022
    Hello Compiler Experts!

    In some book I read this statement:

    The meaning of an expression is
    the value of the expression.

    For example, the meaning of this expression:

    1 + 1

    is 2.

    Originally I thought I read the statement in some math book, but after searching through my books and after talking to some mathematicians, I believe that I did not read it in any math book.

    Today while reading the Bison manual, I noticed in it a sentence that said:

    ... the meaning of a variable ...

    Aha! Perhaps it was in a Bison book or a compiler book that I read the statement. Is the statement something that you would say? If yes, why do you say it? The mathematicians and linguists that I spoke to thought the statement was crazy. Perhaps the statement is appropriate in the context of compilers, but not elsewhere? Also, from your perspective are these two statements equivalent:

    The meaning of an expression is
    the value of the expression.

    The semantics of an expression is
    the value of the expression.

    Do you always use the word "meaning" or do you sometimes use the word "semantics"? Do the two words mean (no pun intended) the same thing to you, from a compiler perspective?

    /Roger
    [I think the meaning here is not to believe everything you read. -John]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Roger L Costello on Fri Jan 14 18:20:48 2022
    Roger L Costello <costello@mitre.org> writes:
    Do you always use the word "meaning" or do you sometimes use the word "semantics"? Do the two words mean (no pun intended) the same thing to you, from a compiler perspective?

    "Semantics" is more technical, and not completely synonymous with
    "meaning". When talking about the meaning of a piece of code, we tend
    to differentiate between syntax (what is described by a context-free
    grammar) and semantics (everything beyond that). Semantics is divided
    into static semantics (how to interpret the code at compile time), and
    run-time semantics (what happens at run time). Whether "meaning"
    refers to everything or just to some part of that depends on the
    context (although I don't think I have seen it used for syntax only).

    - anton
    --
    M. Anton Ertl
    anton@mips.complang.tuwien.ac.at
    http://www.complang.tuwien.ac.at/anton/

  • From Christopher F Clark@21:1/5 to All on Fri Jan 14 22:42:58 2022
    I wouldn't use either of the statements in a compiler construction context:

    The meaning of an expression is the value of an expression.
    The semantics of an expression is the value of an expression.

    Although if I had to imagine using one of those two sentences, I could
    vaguely imagine using the former? As in "what does 1 + 1 mean?" It
    means "add one and one together, the answer being two". But, even in
    that case, I wouldn't make it a blanket statement. You have to
    contrive an example where you would use a statement like that and it
    isn't particularly natural. Most importantly, it isn't something
    general.

    This is as close as I can imagine coming:
    "In a value oriented language, the meaning of an expression is the
    value of an expression".

    So, I heartily agree with our esteemed moderator. Don't believe
    everything you read.
    --
    ******************************************************************************
    Chris Clark                  email: christopher.f.clark@compiler-resources.com
    Compiler Resources, Inc.     Web Site: http://world.std.com/~compres
    23 Bailey Rd                 voice: (508) 435-5016
    Berlin, MA 01503 USA         twitter: @intel_chris
    ------------------------------------------------------------------------------

  • From Hans-Peter Diettrich@21:1/5 to Roger L Costello on Sat Jan 15 00:28:03 2022
    On 1/14/22 1:15 PM, Roger L Costello wrote:

    In some book I read this statement:

    The meaning of an expression is
    the value of the expression.

    Perhaps in contrast to the meaning of a loop in code?


    Today while reading the Bison manual, I noticed in it a sentence that said:

    ... the meaning of a variable ...

    It may mean a state of the program?


    You know what I mean:

    Expression: I want that evaluated!
    Loop: I want that executed repeatedly!
    Variable: I want that value remembered!
    #error: I want the compilation aborted!


    [I think the meaning here is not to believe everything you read. -John]

    :-)

    DoDi

  • From gah4@21:1/5 to Roger L Costello on Fri Jan 14 17:58:01 2022
    On Friday, January 14, 2022 at 9:40:24 AM UTC-8, Roger L Costello wrote:
    Hello Compiler Experts!

    In some book I read this statement:

    The meaning of an expression is
    the value of the expression.

    I think that is wrong.

    C is a little strange as languages go, but you can have an expression statement like:

    1 + 1

    which says to add one and one and then ignore the result. It has a value, but no meaning.
    I suspect most compilers won't even emit the addition, but I never looked.

    More common is a function call with side effects, and ignore the value.

    printf("Hi there!");

    is an expression with the value ignored, but with a meaning.
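    gah4's distinction can be sketched directly. A minimal example (written
    here in C++ with C-style I/O; the same idea holds in C):

    ```cpp
    #include <cstdio>

    int main() {
        int i = 1;
        1 + 1;  // expression statement: the value 2 is computed (or folded
                // away) and discarded; many compilers warn "no effect"
        i + 5;  // likewise discarded
        printf("Hi there!\n");  // return value (character count) ignored,
                                // but the side effect -- the output -- is
                                // the whole point
        return 0;
    }
    ```

    Compilers typically flag the first two statements (e.g. with
    `-Wunused-value`) precisely because a discarded, side-effect-free value
    suggests the programmer meant something else.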

  • From matt.timmermans@gmail.com@21:1/5 to All on Sat Jan 15 06:21:08 2022
    The meaning of an expression is the value of the expression.

    That is not true. This might be said in a magazine article written for laymen about programming languages, or in a philosophical context that doesn't refer to practical work.

    The semantics of an expression is the value of the expression.

    That's not right either. The semantics of the *language* determine the
    meaning of expressions written in that language.

    The meaning of an expression is what you communicate to the compiler or to readers by writing that expression.

    In C, for example,
    int a = 1+1;

    means "declare a variable named a of type int, and initialize it with the value produced by adding the integers 1 and 1".

    The expression part of this, "1 + 1", means "the value produced by adding the integers 1 and 1". This is *not* the same as "2". The compiler may
    determine that it's equivalent to "2", and will *probably* not write out any actual addition instructions, but what you *wrote* is an addition, and its meaning is determined by the semantics of addition as defined in C.

    Of course, expressions in most languages can also include function calls and operators that produce side effects, like "printf("%d",++i);", which certainly has a meaning even though it produces no meaningful value.

  • From George Neuner@21:1/5 to costello@mitre.org on Sat Jan 15 02:05:10 2022
    On Fri, 14 Jan 2022 12:15:06 +0000, Roger L Costello
    <costello@mitre.org> wrote:

    Hello Compiler Experts!

    In some book I read this statement:

    The meaning of an expression is
    the value of the expression.

    For example, the meaning of this expression:

    1 + 1

    is 2.

    Just a guess, but it is possible that such a statement might have
    accompanied an example of syntax directed translation.


    Is the statement something that you would say?

    Informally speaking to someone, I could see myself saying something
    like that. I would never write anything so profound.

    No matter what the context, I would (try to) make certain that it was understood that the statement pertained only to whatever currently was
    under discussion.


    The mathematicians and linguists that I spoke to thought the statement
    was crazy.

    No doubt. <grin>

    The statement is not "crazy" per se, but certainly it is context
    dependent and doesn't have any meaning beyond that context.


    YMMV,
    George

  • From Jan Ziak@21:1/5 to Roger L Costello on Sun Jan 16 07:44:23 2022
    On Friday, January 14, 2022 at 6:40:24 PM UTC+1, Roger L Costello wrote:
    For example, the meaning of this expression:

    1 + 1

    is 2.

    The meaning of 1+1 is a transition between two states. The machine is in state A before the expression is processed and is in state B after the processing ends (usually, A and B are different states, albeit in some cases A is the
    same as B). A and B are stored in cells of memory (usually: A and B are distributed over many binary cells). The binary cells can in some cases influence the emission of photons (such as: bits of GPU's frame-buffer; the placement of particles of black powder on a white paper coming from a printing machine).

    -atom
    [We seem to be pretty deep in this tarpit now. -John]

  • From Jan Ziak@21:1/5 to Jan Ziak on Mon Jan 17 15:45:10 2022
    On Sunday, January 16, 2022 at 6:27:58 PM UTC+1, Jan Ziak wrote:
    On Friday, January 14, 2022 at 6:40:24 PM UTC+1, Roger L Costello wrote:
    For example, the meaning of this expression:

    1 + 1

    is 2.

    The meaning of 1+1 is a transition between two states. ....

    -atom
    [We seem to be pretty deep in this tarpit now. -John]

    @John It's not about a tarpit. If a person believes that the [only] meaning of 1+1 is 2 then it increases the probability that the person does not know about modulo arithmetic: ((1+1) mod 2) is 0. Thus, it is better to adopt the more general viewpoint that the meaning of an expression is a transition between
    two [machine] states. The next concern is the complexity of those states.
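    The modulo point is easy to make concrete; a minimal sketch:

    ```cpp
    #include <cstdio>

    int main() {
        printf("%d\n", 1 + 1);        // ordinary integer addition: 2
        printf("%d\n", (1 + 1) % 2);  // the same sum reduced mod 2: 0
        return 0;
    }
    ```

    The written expression is the same "1 + 1" in both lines; the value it
    denotes depends on the arithmetic the surrounding context imposes.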

    -atom
    [Now we're even deeper in the tarpit. Is the "meaning" a mathematical statement,
    an instruction to a compiler to generate code computing the value of an expression,
    something else? I don't know, and it's pretty clear none of the rest of us do either.
    -John]

  • From Jan Ziak@21:1/5 to All on Tue Jan 18 10:03:06 2022
    Now we're even deeper in the tarpit. Is the "meaning" a mathematical statement,
    an instruction to a compiler to generate code computing the value of an expression,
    something else? I don't know, and it's pretty clear none of the rest of us do either.
    -John

    In my opinion, your comments are moving this discussion in the wrong
    direction. In the context of compilers: a compiler isn't in control of the meaning of the source code input into the compiler, because the input might be a Universal Turing Machine. The compiler knows for certain that the source
    code is moving a machine from one state to another state (while not in a terminating state: perform a state transition) while the source code is being executed - a compiler isn't trying to answer the question of what the _final_ meanings of those state transitions are. If the compiler tried to answer such
    a question, then the compiler might never stop compiling the source code. The set of terminating states is specified by the source code, not by the
    compiler.

    -atom
    [If you go back to the start of the thread, someone asked about
    a statement that "The meaning of an expression is the value of the expression." I suppose we could ask what it means to a compiler but that seems awfully anthropomorphic. -John]

  • From gah4@21:1/5 to and our moderator on Tue Jan 18 15:18:51 2022
    (snip, and our moderator wrote)

    [If you go back to the start of the thread, someone asked about
    a statement that "The meaning of an expression is the value of the expression."
    I suppose we could ask what it means to a compiler but that seems awfully anthropomorphic. -John]

    This is reminding me too much about the episodes of Star Trek
    where Kirk and Spock have to destroy a computer. (None of which
    ever seem to have a compiler.)

    They give a logical inconsistency, which the computer works
    on for a while, until smoke comes out and we know it is dead.
    (And the computers always talk, making the anthropomorphising
    easier.)

    As for actual compilers, what do they do with logical inconsistencies?

    As above, normally they should just compile what they are given,
    and not try to actually process it. But more and more compilers optimize
    code, which means (often enough) that they do processing that would
    otherwise be done at run time.

    My favorite optimization story, which I heard many years ago when it
    was already old, is from the OS/360 Fortran H days. It seems that
    there was a popular Fortran benchmark program that evaluated
    some very complicated expression, mostly with the use of statement
    functions. (The sometimes useful, but now deprecated Fortran
    feature.) Fortran H expands statement functions inline (unlike, it
    seems, other compilers at the time).

    The result of this, was that the compiler evaluated the
    whole complex expression at compile time (very slowly),
    and printed the result at run time (very quickly).

    So, back to anthropomorphic computers and logical
    inconsistencies. How good are compilers, especially ones
    that evaluate constant expressions at compile time, at
    dealing with logic failure? And especially, as the question
    needs, expressions that don't have a value?

  • From Hans-Peter Diettrich@21:1/5 to All on Wed Jan 19 11:54:00 2022
    On 1/19/22 12:18 AM, gah4 wrote:

    So, back to anthropomorphic computers and logical
    inconsistencies. How good are compilers, especially ones
    that evaluate constant expressions at compile time, at
    dealing with logic failure?

    Optimization is a special science. A compiler might evaluate a constant
    expression properly even where evaluation at run time would fail due to
    overflow of too-narrow types in the compiled code.

    And especially, as the question
    needs, expressions that don't have a value?

    Aren't these called *statements*?
    Syntax does not normally allow for expressions without values;
    semantics disallows the use of subroutines without a return type as part
    of an expression.

    Expressions always have a value, but if that value is not used further
    then the compiler can ignore that part of the source code. Problems can
    arise from unrecognized side effects or exceptions eliminated by dead
    code elimination.

    DoDi

  • From Christopher F Clark@21:1/5 to All on Wed Jan 19 20:13:18 2022
    Not to continue to belabor this overly drawn-out discussion, but
    gah4's question of feeding a compiler a logical inconsistency and
    having it crash is vaguely relevant.

    I have two examples. The first one, hearsay, from what I consider
    normally reliable sources.

    The template mechanism of C++ is Turing Complete and unfortunately,
    some programmers have taken advantage of that in an attempt to
    optimize their programs to an unreasonable extent. The result is
    compilations that take hours for programs that run in seconds and may
    only be run once. I can only imagine someone writing an NP-complete
    problem, say 3-SAT as a template and wondering why their compilation
    never finishes when they give it a problem with one hundred variables.

    The template mechanism may be Turing Complete, but the implementation
    doesn't have all the limitations and declarations that make programs
    tractable. Moreover, no one has spent years optimizing the template
    processor for such cases, and it is not clear that it would be
    possible to do so. We don't know how to solve the halting problem. And
    the template processor is supposed to *be* the Oracle, not require
    one.
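    A minimal sketch of the kind of compile-time computation Chris
    describes; template recursion stands in here for the far heavier
    metaprograms he has in mind:

    ```cpp
    #include <cstdio>

    // Compile-time factorial: the "program" runs in the template
    // processor during compilation; by the time main() executes,
    // Factorial<10>::value is already a constant.
    template <unsigned N>
    struct Factorial {
        static const unsigned long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {  // base case terminates the recursion
        static const unsigned long value = 1;
    };

    int main() {
        printf("%lu\n", Factorial<10>::value);
        return 0;
    }
    ```

    Here the recursion terminates quickly, but nothing in the template
    mechanism itself guarantees that: an ill-founded recursion simply runs
    until the compiler hits its instantiation-depth limit, which is exactly
    the hazard described above.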

    The second one was from my own experience working on the Alpha
    optimizer at DEC. The C++ compiler was one of the last pieces of
    software to use the one based upon Fred Chow's work. That was my responsibility at the time (the optimizer, not the C++ compiler). In
    any case, the front end writers were very aggressive in inlining
    subroutines and doing whole-program optimization by that trick. The
    result was very massive files of the intermediate representation. The
    result, for certain large (and important) C++ programs the compiler
    would work for days before it had filled up all the paging space on
    the engineering cluster at DEC, which was multiple disks of paging and
    as a result crashed not only the compiler, but in some cases the
    entire cluster. This made for some very unhappy users, because they
    had waited days and they still didn't get a successful compilation.

    Fortunately, a small amount of analysis on my part allowed me to
    realize that most of the optimizer's data structures, while N-squared in
    size, were actually filled mostly with zero values. Thus, with a
    couple days of work I was able to come up with easily implemented
    compression schemes that drastically reduced that footprint. The
    result was that the compilation finished in mere minutes, not days,
    and more importantly, actually finished, not filled the paging disk
    and crashed.

    -----

    Thus, in that respect the "meaning" of an expression might be vaguely
    relevant if the compiler is required to "figure it out". And, so
    while it doesn't take a logical inconsistency to crash a compiler, we
    do have the means to do so. It isn't actually that hard.

    --
    ******************************************************************************
    Chris Clark                  email: christopher.f.clark@compiler-resources.com
    Compiler Resources, Inc.     Web Site: http://world.std.com/~compres
    23 Bailey Rd                 voice: (508) 435-5016
    Berlin, MA 01503 USA         twitter: @intel_chris
    ------------------------------------------------------------------------------

  • From Jan Ziak@21:1/5 to All on Wed Jan 19 09:17:17 2022
    On Wednesday, January 19, 2022 at 4:13:36 PM UTC+1, Hans-Peter Diettrich
    wrote:
    On 1/19/22 12:18 AM, gah4 wrote:

    So, back to anthropomorphic computers and logical
    inconsistencies. How good are compilers, especially ones
    that evaluate constant expressions at compile time, at
    dealing with logic failure?
    Optimization is a special science. A compiler might evaluate a constant
    expression properly even where evaluation at run time would fail due to
    overflow of too-narrow types in the compiled code.

    An analogy: There exist two kinds of compression algorithms: lossy and lossless. Similarly, there exist two kinds of optimization algorithms. If it
    is a lossless optimization algorithm, then any kind of difference between compile-time evaluation and run-time evaluation is a software bug in the optimizer. If it is a lossy optimization algorithm, then it should be sufficiently clearly specified what kind of information is permitted to be
    lost during the optimization process.

    And especially, as the question
    needs, expressions that don't have a value?
    Aren't these called *statements*?

    One could imagine that, as a minimal requirement that has to be fulfilled by any statement, the "value" of a statement X must include information about
    what the next statement is after X is done executing, unless the programmer or the compiler proves that the statement never terminates by itself.

    Back to the original post by Roger L Costello: A problem with both "The
    meaning of an expression is its value" and "The semantics of an expression is the value of the expression" is that some expressions never return a value because their values aren't computable; we are only able to observe that,
    while the expression is being computed/evaluated, it is executing as a
    sequence of well-defined steps where each of the steps is known to evaluate/terminate quickly. If a person adopts the belief that "The meaning/semantics of an expression is its value" then consequently the person will later be forced to adopt the belief that it is impossible to know the meaning of certain kinds of expressions.

    -atom

  • From Thomas Koenig@21:1/5 to Roger L Costello on Wed Jan 19 20:51:12 2022
    Roger L Costello <costello@mitre.org> schrieb:

    In some book I read this statement:

    The meaning of an expression is
    the value of the expression.

    Jumping in late...

    Computer science terminology can unfortunately be imprecise, and
    different people and different documents use different words to mean
    the same thing, and vice versa.

    If you want to know what expression means in a particular language,
    look at its standards documents. For example, Fortran states
    (F2018, 10.1.1)

    # An expression represents either a data object reference
    # or a computation, and its value is either a scalar or an
    # array. Evaluation of an expression produces a value, which has a
    # type, type parameters (if appropriate), and a shape (10.1.9).

    whereas C states (n2596)

    # An expression is a sequence of operators and operands that
    # specifies computation of a value,92) or that designates an object
    # or a function, or that generates side effects, or that performs
    # a combination thereof.

    so the two languages obviously have different meanings for the
    term, and applying one definition to the other language is likely
    to lead to confusion (such as about side effects in Fortran
    expressions).

    I find no definition of "meaning" in either standard, so although
    your question contains the word "meaning", I do not think it
    is meaningful. Know what I mean?

  • From gah4@21:1/5 to All on Wed Jan 19 14:03:05 2022
    (snip, I wrote)
    So, back to anthropomorphic computers and logical
    inconsistencies. How good are compilers, especially ones
    that evaluate constant expressions at compile time, at
    dealing with logic failure?

    (snip)
    An analogy: There exist two kinds of compression algorithms: lossy and lossless. Similarly, there exist two kinds of optimization algorithms. If it is a lossless optimization algorithm, then any kind of difference between compile-time evaluation and run-time evaluation is a software bug in the optimizer. If it is a lossy optimization algorithm, then it should be sufficiently clearly specified what kind of information is permitted to be lost during the optimization process.

    I find the other way around more interesting.

    There is a popular Fortran compiler that uses a high-precision arithmetic package to evaluate constant expressions. As I understand it, it will accurately calculate sin(1e100). At run time, though, you only have
    normal machine precision.

    In most science/engineering problems, data has an uncertainty, and
    floating point well represents data with a relative uncertainty. There
    is no reasonable answer to sin(x) when x has an uncertainty
    of more than 2*pi. (There is an extremely small branch of mathematics
    where problems like this might occur. But never in physics or engineering.)

    But in any case, calculating constant expressions more accurately
    (or less accurately) means that they won't be equal to the same thing calculated at run time. (Now, people are supposed to know not to
    compare floating point values for equality, but it is still surprising.)
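    The equality pitfall itself is easy to reproduce, independent of any
    compiler's constant folding; a minimal sketch:

    ```cpp
    #include <cstdio>
    #include <cmath>

    int main() {
        // 0.1 has no exact binary representation, so ten additions of it
        // do not land exactly on the 1.0 written as a literal.
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;
        printf("%s\n", sum == 1.0 ? "equal" : "not equal");
        printf("%s\n", fabs(sum - 1.0) < 1e-9 ? "close" : "far");
        return 0;
    }
    ```

    The same mismatch appears whenever one side of a comparison was
    computed at a different precision than the other -- which is exactly
    what compile-time evaluation at higher precision produces.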

    Early Fortran didn't have constant expressions, so any such evaluation
    was optional optimization. More recent versions do, though, so there
    are some things that the compiler is required to evaluate at compile time.

    I believe that there are cases equivalent to the mentioned C++ cases.

  • From Hans-Peter Diettrich@21:1/5 to Jan Ziak on Thu Jan 20 13:02:34 2022
    On 1/19/22 6:17 PM, Jan Ziak wrote:
    On Wednesday, January 19, 2022 at 4:13:36 PM UTC+1, Hans-Peter Diettrich wrote:
    On 1/19/22 12:18 AM, gah4 wrote:

    And especially, as the question
    needs, expressions that don't have a value?
    Aren't these called *statements*?

    One could imagine that, as a minimal requirement that has to be fulfilled by any statement, the "value" of a statement X must include information about what the next statement is after X is done executing, unless the programmer or
    the compiler proves that the statement never terminates by itself.

    Can you please specify the language you have in mind?

    Languages like C or Pascal distinguish expressions from statements in
    their grammar. Allowed is to ignore the value (result) of an expression evaluation but not to use a non-expression as a value.


    Back to the original post by Roger L Costello: A problem with both "The meaning of an expression is its value" and "The semantics of an expression is the value of the expression" is that some expressions never return a value

    AFAIR K&R C defined the value of a function call to be the value
    contained in the accumulator after return. A decision with horrible consequences if you look at compiler and library source code of that
    time. OTOH it derogates the meaning of an expression if at any time one
    can find a value in the defined result register.

    So I think that it's moot to discuss obscure languages that do not
    strictly distinguish expressions from statements. At the abstract level
    we could try to define what *is* an expression, with my favorite:
    "Evaluation of an expression yields a value."
    Then it should be clear that the meaning of an expression is a value.

    DoDi
    [Early C didn't have default return values, but since the compilers also
    didn't do much type checking, I can believe there was code that
    worked by accident because the value of the last expression in a function
    happened to be in the register where the caller looked for the result.
    It's sort of like the Berkeley bug: there was always a zero byte at
    memory location zero, so dereferencing null pointers sort of worked. -John]

  • From Thomas Koenig@21:1/5 to Christopher F Clark on Sat Jan 22 20:46:29 2022
    Christopher F Clark <christopher.f.clark@compiler-resources.com> schrieb:

    The second one was from my own experience working on the Alpha
    optimizer at DEC. The C++ compiler was one of the last pieces of
    software to use the one based upon Fred Chow's work. That was my responsibility at the time (the optimizer, not the C++ compiler). In
    any case, the front end writers were very aggressive in inlining
    subroutines and doing whole-program optimization by that trick.

    When to inline and when not to inline is still a major issue today.

    The result was very massive files of the intermediate representation. The result, for certain large (and important) C++ programs the compiler
    would work for days before it had filled up all the paging space on
    the engineering cluster at DEC, which was multiple disks of paging and
    as a result crashed not only the compiler, but in some cases the
    entire cluster. This made for some very unhappy users, because they
    had waited days and they still didn't get a successful compilation.

    https://xkcd.com/303/ and https://dilbert.com/strip/1998-06-04
    come to mind :-)


    Fortunately, a small amount of analysis on my part allowed me to
    realize that most of the optimizers data structures while N-squared in
    size, were actually filled with mostly zero values.

    Quadratic algorithms (whether in space or in time) are a progressively
    bad idea. Programs are getting bigger, and anybody who thinks
    a quadratic algorithm is OK will be hit sooner or later by a
    (real-life) test case which brings it out into the open.

    Just a random example: Some time ago, a test case for gcc was
    submitted which used up hours and gigabytes for compilation.
    It consisted of a single basic block in a subroutine, with thousands
    of variables and thousands of assignments (getting translated into
    tens of thousands of SSA statements).

    This code was not written by a human, but by a computer algebra
    system expanding some complicated formula.

  • From dave_thompson_2@comcast.net@21:1/5 to DrDiettrich1@netscape.net on Sun Jan 30 22:51:54 2022
    On Thu, 20 Jan 2022 13:02:34 +0100, Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:
    ...
    AFAIR K&R C defined the value of a function call to be the value
    contained in the accumulator after return. A decision with horrible consequences if you look at compiler and library source code of that
    time. OTOH it derogates the meaning of an expression if at any time one
    can find a value in the defined result register.
    ...
    [Early C didn't have default return values, but since the compilers also
    didn't do much type checking, I can believe there was code that
    worked by accident because the value of the last expression in a function
    happened to be in the register where the caller looked for the result. ...]

    Through K&R1 C didn't have 'void' -- all functions had some return
    type, which if not written defaulted to 'int' (as in BCPL and B). The _compilers_ did typecheck but not most _linkers_ so a mismatch across separately-compiled files was one way to get this problem; one of the
    features of 'lint' was to catch such. (FORTRAN had the same issue, and
    there were some tools for it, but I don't recall one as prominent.
    COBOL, at least then, didn't have value-returning subprograms,
    although it could have mismatch on arguments. I don't recall what
    Pascals did -- when they had separate compilation at all -- and never
    saw a non-toy algol. C++ overloading effectively required typesafe
    linkage, and Ada assumed a 'repository' preventing mismatches.)

    Another way is if you 'fall off the end' of a (non-void) function or,
    up through the first standard (C89/90), wrote and executed a return
    statement with no expression; before void this was widely used for
    functions with no useful value. If the callsite doesn't discard the
    notional 'value' of a call that does one of these, it is officially
    undefined behavior and in practice usually takes whatever value is
    lying about in the register used by the calling sequence -- but on
    PDP-11 and Interdata at least that wasn't 'the accumulator' because
    they had no such thing.

  • From Johann 'Myrkraverk' Oskarsson@21:1/5 to matt.ti...@gmail.com on Thu Feb 3 12:50:15 2022
    On 1/15/2022 2:21 PM, matt.ti...@gmail.com wrote:

    Of course, expressions in most languages can also include function calls and operators that produce side effects, like "printf("%d",++i);", which certainly
    has a meaning even though it produces no meaningful value.

    Actually, it does. It returns the number of characters written, or -1
    on error. Therefore, you can write a /meaningful/ hello world like
    this,

    int main( int argc, char *argv[] ) { return printf( "hello\n" ); }

    Of course, people generally don't like to put the return before the
    printf(), and there's no telling what the operating system will do;
    think OpenVMS.

    --
    Johann | email: invalid -> com | www.myrkraverk.com/blog/
    I'm not from the Internet, I just work there. | twitter: @myrkraverk
