• Languages with optional spaces

    From Ev. Drikos@21:1/5 to Maury Markowitz on Sun Feb 23 12:33:36 2020
    On 19/02/2020 17:35, Maury Markowitz wrote:
    ... Likewise, one might modify the variable name
    pattern, but I'm not sure how one says "everything that doesn't start with one
    of these other 110 patterns".


    This should be relatively simple with a scanner generator that supports
    intersection and negation (or difference) operators, but IMHO such a
    rule would work only if FORI weren't a valid variable name. Still, I
    can't propose a specific tool to you.

    With a tool like flex, I'd try to see if different start states do work: https://www.cs.virginia.edu/~cr4bd/flex-manual/Start-Conditions.html

    Ev. Drikos
    [You can try start states but in my experience if the tokenizing rules
    are very context sensitive, it's easier to give up and hand-code the
    lexer. The lexical syntax of Basic isn't that big. -John]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Maury Markowitz@21:1/5 to All on Tue Feb 25 06:11:18 2020
    So it seems the most basic answer is "no, it won't do that". Perhaps this should be a feature? "Tokens are immediate terminals" or somesuch.

    I originally chose flex/bison because it is widely available on almost any platform (although on Windows it is sub-optimal) and I found a reasonable BASIC starting point online.

    In any event, it does seem the practical solution is to make my own lexer. Luckily there are any number that exist already in cross-platform form.

    Perhaps you all have some starting points I should look at? I'm most interested in plain C although I am definitely going to look at Jerry's code.
    [Writing a lexer for Basic shouldn't be very hard since the token structure is pretty straightforward. The hardest part
    is being sure you detect and deal with invalid input in all the places it can occur. C lexers tend to be loops with
    big switch statements on the input characters, and you need to remember to have a default in all of them. -John]

  • From Maury Markowitz@21:1/5 to All on Tue Feb 25 06:13:23 2020
    Unless I'm mistaken, Integer BASIC is a "deep tokenizer", so is it not
    the case that spaces exist only at entry time? I.e., if one types
    'PRINT"HELLO"', will that actually be LISTed as 'PRINT "HELLO"'?
    [That is my recollection. Storing tokens takes less space than storing the input text and on those tiny little
    micros, every byte counted. -John]

  • From Kaz Kylheku@21:1/5 to Maury Markowitz on Wed Feb 26 08:06:04 2020
    On 2020-02-19, Maury Markowitz <maury.markowitz@gmail.com> wrote:
    I'm trying to write a lex/yacc (flex/bison) interpreter for classic BASICs like the original DEC/MS, HP/DG etc. I have it mostly working for a good chunk
    of 101 BASIC Games (DEF FN is the last feature to add).

    Then I got to Super Star Trek. To save memory, SST removes most spaces, so lines look like this:

    100FORI=1TO10

    Here's my current patterns that match bits of this line:

    FOR                  { return FOR; }

    [:,;()\^=+\-*/\<\>]  { return yytext[0]; }

    [0-9]*[0-9.][0-9]*([Ee][-+]?[0-9]+)? {
        yylval.d = atof(yytext);
        return NUMBER;
    }

    "FN"?[A-Za-z@][A-Za-z0-9_]*[\$%\!#]? {
        yylval.s = g_string_new(yytext);
        return IDENTIFIER;
    }

    These correctly pick out some parts, numbers and = for instance, so it sees:

    100 FORI = 1 TO 10

    The problem is that FORI part. Some BASICs allow variable names with
    more than two characters, so in theory FORI could be a variable. These
    BASICs outlaw that in their parsers: any string that starts with a
    keyword ends the scan there, so this would always parse as FOR. In lex,
    FORI is longer than FOR, so it returns a variable token called FORI.

    Is there a way to represent this in lex? Over on Stack Overflow the only suggestion seemed to be to use trailing syntax on the keywords, but that appears to require modifying every one of simple patterns for keywords with some extra (and ugly) syntax. Likewise, one might modify the variable name pattern, but I'm not sure how one says "everything that doesn't start with one
    of these other 110 patterns".

    Two ideas:

    1. Just forget recognizing variable names in the lexer. Instead,
    recognize only the constituent letter of a variable name in the lexer.
    Then in the parser, have a grammar production which converts
    the letters of a variable into a variable.

    variable : VARCHAR
             | variable VARCHAR
             ;

    2. Use regex patterns in the lexer to recognize just the keywords,
    as above. Then recognition of variable names is handled by matching
    just one letter A-Z, whose lex action performs ad-hoc lexical
    analysis using C logic. At that point you know that you do not have
    a keyword, because no keyword rule matched. You can read further
    characters with yyinput() and accumulate a variable name.

    A variant of technique (2) is used for scanning C comments,
    as an alternative to an ugly regular expression:

    "/*" {
    int c;

    while ((c = yyinput()) != 0)
    {
    if (c == '\n') {
    /* increment line number or something */
    }
    else if (c == '*')
    {
    if ((c = yyinput()) == '/')
    break;
    else
    unput(c);
    }
    }
    }

    The above is an adaptation of something from an old Flex manual.
    IIRC the Dragon Book has a similar example of ad-hoc logic
    in a lex rule for handling C comments.

    You can see that it's a similar idea. We use a regex to partially match
    the comment, just the /* opening. Then we take over from there.

    I have a hunch this would work for fetching variables like FORI, when
    there is no match on a keyword like FOR.

    --
    TXR Programming Language: http://nongnu.org/txr
    Music DIY Mailing List: http://www.kylheku.com/diy
    ADA MP-1 Mailing List: http://www.kylheku.com/mp1

  • From awanderin@21:1/5 to Maury Markowitz on Wed Feb 26 10:03:11 2020
    Maury Markowitz <maury.markowitz@gmail.com> writes:

    Unless I'm mistaken, Integer BASIC is a "deep tokenizer", so is it not
    the case that spaces exist only at entry time? I.e., if one types
    'PRINT"HELLO"', will that actually be LISTed as 'PRINT "HELLO"'?
    [That is my recollection. Storing tokens takes less space than
    storing the input text and on those tiny little
    micros, every byte counted. -John]

    In the Apple II Microsoft BASIC, yes, the tokenizer removed spaces
    except within strings. On the Commodore machines, their version of
    Microsoft BASIC did not purge the spaces. This allowed the programmer
    to "indent" the code (after the line number). The tokens, like PRINT,
    FOR, INPUT, etc, are stored as one byte in all the Microsoft BASICs.
    The Apple II Integer BASIC also tokenizes to one byte and removes
    spaces, but it did more syntax checking at entry time, so it was more a
    lexer + parser than a lexer like Microsoft BASIC.
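    The crunch/LIST round trip Jerry describes can be sketched in a few
    lines of C. The token byte values and function names here are invented
    for illustration (real Microsoft and Integer BASIC ROMs use different
    codes), and only the Apple II behaviour of dropping spaces outside
    strings is modelled:

```c
#include <stdio.h>
#include <string.h>

/* One byte per keyword, codes >= 0x80; these values are made up and do
   not match any real BASIC ROM. */
static const char *keywords[] = { "PRINT", "FOR", "TO", "NEXT", 0 };

/* Crunch a typed line: keywords become single bytes, spaces outside
   string literals are dropped (the Apple II behaviour; Commodore kept
   them). */
void crunch(const char *src, unsigned char *out)
{
    int in_string = 0;
    while (*src) {
        if (*src == '"')
            in_string = !in_string;
        if (!in_string) {
            if (*src == ' ') { src++; continue; }
            int hit = -1;
            for (int i = 0; keywords[i]; i++)
                if (strncmp(src, keywords[i], strlen(keywords[i])) == 0) {
                    hit = i;
                    break;
                }
            if (hit >= 0) {
                *out++ = (unsigned char)(0x80 + hit);
                src += strlen(keywords[hit]);
                continue;
            }
        }
        *out++ = (unsigned char)*src++;
    }
    *out = 0;
}

/* LIST: expand token bytes back to keyword text, re-inserting spaces
   around each keyword. */
void list_line(const unsigned char *tok, char *out)
{
    size_t len = 0;
    for (; *tok; tok++) {
        if (*tok >= 0x80) {
            if (len && out[len - 1] != ' ')
                out[len++] = ' ';
            len += sprintf(out + len, "%s ", keywords[*tok - 0x80]);
        } else {
            out[len++] = (char)*tok;
        }
    }
    if (len && out[len - 1] == ' ')
        len--;                        /* drop a trailing space */
    out[len] = '\0';
}
```

    Typing PRINT"HELLO" thus stores one token byte plus the string, and
    LIST shows PRINT "HELLO", matching Maury's recollection above.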

    --
    Jerry awanderin at gmail dot com

  • From Martin Ward@21:1/5 to Ev. Drikos on Tue Feb 25 17:00:57 2020
    On 23/02/20 10:33, Ev. Drikos wrote:
    [You can try start states but in my experience if the tokenizing rules
    are very context sensitive, it's easier to give up and hand-code the
    lexer. The lexical syntax of Basic isn't that big. -John]

    The BASIC for the Compukit UK101 and Ohio Superboard stores
    each line as a sequence of characters. As a line is entered
    any keywords appearing anywhere on the line are replaced
    by control characters. Keywords are not allowed anywhere
    within a variable name. So any statement either starts
    with a keyword (a control character), which determines
    the statement type, or it is an assignment with the LET
    keyword omitted.

    Due to the small amount of memory available, variable names
    are typically kept short and spaces are omitted. Short variable
    names are less likely to include an embedded keyword.
    Also, only the first two characters of a variable name are significant.
    The program will also run faster with short variable names, since the
    BASIC interpreter scans each line as it is interpreted.

    More information and some sample BASIC software:

    http://www.gkc.org.uk/martin/software/#UK101

    A procedurally generated, open-world, RTS (real-time strategy) game
    with destructible environments which runs in 8K of RAM:

    http://www.gkc.org.uk/martin/software/startrek.html

    --
    Martin

    Dr Martin Ward | Email: martin@gkc.org.uk | http://www.gkc.org.uk
    G.K.Chesterton site: http://www.gkc.org.uk/gkc | Erdos number: 4

  • From Ev. Drikos@21:1/5 to Martin Ward on Fri Feb 28 13:34:40 2020
    On 25/02/2020 19:00, Martin Ward wrote:
    ...So any statement either starts
    with a keyword (a control character), which determines
    the statement type, or it is an assignment with the LET
    keyword omitted...


    A scanner generator that supports e.g. "difference" operators can build
    a DFA that splits e.g. the text FORI into FOR & I without space
    overhead, if the reserved keywords are also tokens of the lexer.

    The lexical rules below result in a DFA with 10 states. Even if I erase
    the difference operator "-=" through to the end of the document, the
    DFA built still has 10 states, but then the text FORI is returned as
    an "id".

    So, the rules below ensure that an "id" doesn't start with FOR or
    PRINT; I'm not sure, though, if this is what the OP really wanted,
    since he said that in theory FORI could be a variable (perhaps it
    can't be just an LHS?).


    Ev. Drikos

    -----------------------------------------------------------------------
    token ::=
        FOR
      | PRINT
      | id

    FOR ::=
        F O R

    PRINT ::=
        P R I N T

    id ::=
        {A..Z [{$|A .. Z|0 .. 9}...]} -= {key[{$|A .. Z|0..9}...]}

    key ::=
        FOR
      | PRINT

  • From Christopher F Clark@21:1/5 to All on Sat Feb 29 11:48:41 2020
    "Ev. Drikos" <drikosev@gmail.com> posted an interesting albeit partial solution to the problem of keywords being part of identifiers in languages with
    optional spaces.
    I won't include it here.

    The problem is that some keywords can appear at places other than the
    beginning of an identifier.
    In fact, in the worst case scenario, the language can be ambiguous.
    Consider the following "BASIC" program extended with variables that
    are more than one letter long
    and spaces being optional.

    10 LET ITO = 1
    20 LET I = 2
    30 LET JTOK = 3
    40 LET K = 4
    50 FOR N = ITOJTOK
    60 REM AMBIGUOUS FOR N = I TO JTOK
    70 REM OR FOR N = ITOJ TO K
    80 PRINT N;
    90 NEXT N
    100 END
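    Whether a spaceless expression like the one in line 50 admits more than
    one reading can be checked mechanically: enumerate every occurrence of
    TO and keep the splits whose two sides are plausible operands. A
    throwaway sketch (the function name is invented, and identifiers are
    simplified to bare letter runs, which is enough for this example):

```c
#include <ctype.h>
#include <string.h>

/* Count the ways s can be split as <lower> TO <upper>, where both sides
   must be non-empty runs of letters.  Two or more ways = ambiguous. */
int to_splits(const char *s)
{
    int count = 0;
    size_t len = strlen(s);
    for (size_t i = 1; i + 2 < len; i++) {
        if (strncmp(s + i, "TO", 2) != 0)
            continue;
        int ok = 1;
        for (size_t j = 0; j < i; j++)          /* left side all letters */
            if (!isalpha((unsigned char)s[j]))
                ok = 0;
        for (size_t j = i + 2; j < len; j++)    /* right side all letters */
            if (!isalpha((unsigned char)s[j]))
                ok = 0;
        if (ok)
            count++;        /* e.g. I TO JTOK, or ITOJ TO K */
    }
    return count;
}
```

    For "ITOJTOK" this finds exactly the two readings shown in the REM
    lines above, which is the point: no amount of cleverness in the lexer
    can pick one without extra rules.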

    The problem with such solutions is one is tempted to "fix" them one by
    one as they are encountered.

    Maury Markowitz <maury.markowitz@gmail.com> mentioned this in his post
    where ATO was considered: it could be A TO or AT O (presuming that TO
    and AT are both keywords). Note that this is an issue even with
    one-letter variable names if one has both keywords.

    As one starts patching up these cases, the "grammar"
    (or its recursive descent implementation most likely)
    begins to become what I call "ad hack".

    With a GLR parser (or something equivalent in power, e.g. an Earley
    parser or CYK) and a lexer that returns all possible sets of
    tokenizations one can find all the relevant parse trees and then see
    if only 1 makes semantic sense.

    In the above example, that won't help as both interpretations are
    legal programs.
    One prints 2 3, the other 1 2 3 4.

    I cannot imagine a programmer being happy with the error message:
    LINE 50 AMBIGUOUS STATEMENT.

    --
    Chris Clark                email: christopher.f.clark@compiler-resources.com
    Compiler Resources, Inc.   Web Site: http://world.std.com/~compres
    23 Bailey Rd               voice: (508) 435-5016
    Berlin, MA 01503 USA       twitter: @intel_chris

    [I get the impression that more often than not, whoever wrote the
    interpreter didn't give it much thought, so the grammar is whatever the
    6502 code did thirty years ago. Fortran was ugly, but at least it wasn't
    ambiguous, and at each point the lexer knew what tokens were valid. -John]

  • From Ev. Drikos@21:1/5 to Christopher F Clark on Sat Feb 29 21:38:14 2020
    On 29/02/2020 11:48, Christopher F Clark wrote:
    "Ev. Drikos" <drikosev@gmail.com> posted an interesting albeit partial solution
    to the problem of keywords being part of identifiers in languages with optional spaces.

    It's only an example for the beginning of statements, based on the
    description I read in other postings. It couldn't be the solution to an
    unspecified problem; the terms "usually" and "in theory" aren't specs.

    In contrast, one who wants to code a parser for the particular dialect
    of BASIC should know, e.g., what the 6502 or the Compukit UK101 code did.

    The problem is that some keywords can appear at places other than the beginning of an identifier.
    In fact, in the worst case scenario, the language can be ambiguous.
    ...

    With a GLR parser (or something equivalent in power, e.g. an Earley
    parser or CYK) and a lexer that returns all possible sets of
    tokenizations one can find all the relevant parse trees and then see
    if only 1 makes semantic sense.

    Obviously, those who coded such BASIC parsers had some simpler rules;
    e.g., the position of the first 'TO' might be used for statement 50.
    Such a rule would comply with the description given by Dr Martin Ward.

    Ev. Drikos

  • From Christopher F Clark@21:1/5 to christopher.f.clark@compiler-resour on Sun Mar 1 10:07:52 2020
    I'm sorry if my reply sounded like a criticism of Ev's post. I didn't
    intend it to be. It was more pointing out that when one does things
    by hand, one often makes things that are challenging to formalize.

    I noticed this particular aspect as I was trying to imagine what I
    would need to add to the lexer generator in Yacc++ (you can use lex (or
    flex) with it, but it also includes its own model which has extensions
    so that you can write LR rules in your lexer too and a few other
    things to minimize conflicts, integrate with a symbol table, etc).
    One of our goals with writing it was to capture in a formalizable way
    some of the important "hacks" one often used to solve problems (e.g.
    how to do PL/I style keywords).

    And this problem looked like one where if the hack could be distilled,
    it would make sense to do so. Not that I am a big fan of languages
    with optional spaces, but that doesn't mean that they might not
    represent something people need to handle.

    But John, our moderator, and DoDi are right in a particular sense.
    Certain things were done informally by hand "in the old days". And
    this actually points at another possible direction to a solution, PEGs
    (parsing expression grammars). It's a different way of attacking the
    ambiguous language problem. In PEGs, "or" constructs are not
    un-ordered, but instead you take them in a specific order the grammar
    writer chooses and the first one that applies is the correct one. You
    get somewhat the same effect with lex, where if two rules match you
    use the one specified lexically first. The only difference is that in
    lex, the longest match rule overrides that, where in a PEG, the
    ordering rule overrides (and you control the ordering at the caller
    level, by how you write the "or" not by the order of the rules that
    the "or" invokes).
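    The difference is easy to see on the thread's ATO example, with AT and
    TO as the only keywords. Two toy match policies, returning only the
    length of the first token (function names invented; lex's rule-order
    tie-break is omitted since longest-match decides here):

```c
#include <ctype.h>
#include <string.h>

static const char *keywords[] = { "AT", "TO", 0 };

/* lex policy: of all rules that match, the longest match wins, so the
   identifier rule swallows "ATO" whole (length 3). */
size_t longest_match(const char *s)
{
    size_t id = 0;
    while (isalpha((unsigned char)s[id]))       /* identifier rule */
        id++;
    size_t best = id;
    for (int i = 0; keywords[i]; i++) {
        size_t n = strlen(keywords[i]);
        if (strncmp(s, keywords[i], n) == 0 && n > best)
            best = n;
    }
    return best;
}

/* PEG policy: alternatives are tried in written order and the first
   success commits, so keyword AT (length 2) is taken before the
   identifier rule is even consulted. */
size_t ordered_choice(const char *s)
{
    for (int i = 0; keywords[i]; i++) {
        size_t n = strlen(keywords[i]);
        if (strncmp(s, keywords[i], n) == 0)
            return n;
    }
    size_t id = 0;
    while (isalpha((unsigned char)s[id]))
        id++;
    return id;
}
```

    Same input, same rules, different token boundaries; only the
    disambiguation policy changed.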

    In my ambiguous example, a PEG like construct where the first "TO" is
    the rule that matches is probably not that hard to write.

    And, PEG rules can actually be added to LL and LR grammars. They came
    from syntactic predicates that Terence Parr invented (and we added
    those to Yacc++ by using the idea of "Tunnel Automata" pioneered by
    Josef Grosch). If one goes down that path, it might be possible to
    actually capture what people have done by hand. I don't recall
    whether we added them to the lexer portion of the generator. So,
    now it's on my to-do list.

    On Sat, Feb 29, 2020 at 11:48 AM Christopher F Clark <christopher.f.clark@compiler-resources.com> wrote:

    "Ev. Drikos" <drikosev@gmail.com> posted an interesting albeit partial solution
    to the problem of keywords being part of identifiers in languages with optional spaces.
    I won't include it here.

    The problem is that some keywords can appear at places other than the beginning of an identifier.
    In fact, in the worst case scenario, the language can be ambiguous.
    Consider the following "BASIC" program extended with variables that
    are more than one letter long
    and spaces being optional.

    10 LET ITO = 1
    20 LET I = 2
    30 LET JTOK = 3
    40 LET K = 4
    50 FOR N = ITOJTOK
    60 REM AMBIGUOUS FOR N = I TO JTOK
    70 REM OR FOR N = ITOJ TO K
    80 PRINT N;
    90 NEXT N
    100 END

    The problem with such solutions is one is tempted to "fix" them one by
    one as they are encountered.

    Maury Markowitz <maury.markowitz@gmail.com> mentioned this in his post
    where ATO was considered.
    It could be A TO or AT O (presuming that TO and AT are both keywords)
    Note that this is even an issue with 1 letter variable names if one
    has both keywords.

    As one starts patching up these cases, the "grammar"
    (or its recursive descent implementation most likely)
    begins to become what I call "ad hack".

    With a GLR parser (or something equivalent in power, e.g. an Earley
    parser or CYK) and
    a lexer that returns all possible sets of tokenizations one can find
    all the relevant parse
    trees and then see if only 1 makes semantic sense.

    In the above example, that won't help as both interpretations are
    legal programs.
    One prints 2 3, the other 1 2 3 4.

    I cannot imagine a programmer being happy with the error message:
    LINE 50 AMBIGUOUS STATEMENT.
    [So how do you lex PL/I? It's not ambiguous, and I think you never need
    more than one token of lookahead to decide whether a string of letters
    that looks like a keyword is instead a variable name, as in

    IF IF = THEN; THEN ELSE = IF; ELSE THEN = IF;

    -John]

  • From Hans-Peter Diettrich@21:1/5 to All on Sun Mar 1 00:28:26 2020
    Am 29.02.2020 um 10:48 schrieb Christopher F Clark:

    With a GLR parser (or something equivalent in power, e.g. an Earley
    parser or CYK) and a lexer that returns all possible sets of
    tokenizations one can find all the relevant parse trees and then see
    if only 1 makes semantic sense.

    I wonder how those old coders could fit all that into about 4 KB ROM for
    their BASIC interpreter ;-)

    Isn't there a better solution nowadays than taking a sledgehammer to
    crack a nut? Reconstructing the proper original syntax requires
    disassembling the original code, so nothing hinders one from
    implementing just that code on new hardware.

    DoDi
    [I suppose it depends on whether you want to understand what the old 6502
    code did, or just run it. I'm also not sure I want to emulate whatever
    passed for an arithmetic package on those machines. -John]

  • From Ev. Drikos@21:1/5 to Ev. Drikos on Sun Mar 1 19:41:49 2020
    On 29/02/2020 21:38, Ev. Drikos wrote:
    On 29/02/2020 11:48, Christopher F Clark wrote:
    ...
    Obviously, those who coded such BASIC parsers had some simpler rules,
    ie the position of the first 'TO' might be used for the statement 50.


    Dear Mr. Clark,

    I'll elaborate a little if you don't mind. IMHO, one problem here is
    that an identifier before a keyword is quite difficult for a scanner.
    One could possibly let the parser recognize such obscure identifiers.

    Let's assume that the first 'TO' found in a for-statement after a '='
    is the keyword that separates the lower bound from the upper bound of
    the loop index.

    As a demo example, the simplified grammar at the end of this message
    fails to parse only the last statement (999) below, which looks like
    a known limitation of my Simulator. An actually generated C++ parser
    would need some time and effort that currently I don't plan to spend:

    10 let i=1
    11 letleti=1
    20 i=1
    30 for forj=1 to n
    40 FORFORJ=1TON
    50 FOR N = ITOJTOK
    60 FORFORJ=IIITONTOJ
    70 FORFORJ=TO TO TO
    80 FORFORJ=TTTON
    999 FORFORJ=TOTON


    Of course, this is far away from a working solution, because one needs
    to know a lot of details, e.g. whether any spaces read are indeed
    significant.

    What would you suggest, apart from a hand-coded lexer?

    Ev. Drikos


    ----------------------------------------------------------------------
    SYNTAX RULES
    ----------------------------------------------------------------------

    #sma <id>
    #sma <idl>
    #sma TO

    <grm> ::=
        <statements>

    <statements> ::=
        <statements> <statement>
      | <statement>

    <statement> ::=
        <for-stmt>
      | <assignment>
      | <empty>

    <for-stmt> ::=
        <label> FOR <id> = <lbound> TO <ubound> <;>

    <lbound> ::=
        <number>
      | <id-p>

    <id-p> ::=
        <id-p> <id-part>
      | <id-start>

    <id-part> ::=
        <letter>
      | <digit>
      | $

    <id-start> ::=
        <letter>

    <ubound> ::=
        <rhs>

    <rhs> ::=
        <id>
      | <number>

    <assignment> ::=
        <label> LET <id> = <rhs> <;>
      | <label> <idl> = <rhs> <;>

    <empty> ::=
        <;>



    ----------------------------------------------------------------------
    LEXICAL CONVENTIONS
    ----------------------------------------------------------------------

    #ignore spaces

    #sma <id>
    #sma <idl>

    token ::=
        spaces
      | END
      | FOR
      | LET
      | NEXT
      | PRINT
      | TO
      | <digit>
      | <label>
      | <number>
      | <letter>
      | <idl>
      | <id>
      | $
      | =
      | <;>

    spaces ::=
        { \t | \s }...

    END ::=
        E N D

    FOR ::=
        F O R

    LET ::=
        L E T

    NEXT ::=
        N E X T

    PRINT ::=
        P R I N T

    TO ::=
        T O

    <digit> ::=
        0 .. 9

    <label> ::=
        { 0 .. 9 }...

    <number> ::=
        { 0 .. 9 }...

    <letter> ::=
        A .. Z

    <idl> ::=
        { A .. Z [{$|A .. Z|0..9}...]} -= {key[{$|A..Z|0..9}...]}

    key ::=
        END
      | FOR
      | LET
      | NEXT
      | PRINT

    <id> ::=
        A .. Z [ { $ | A .. Z | 0 .. 9 }... ]

    <;> ::=
        ;
      | \n

  • From Christopher F Clark@21:1/5 to All on Mon Mar 2 08:33:36 2020
    "Ev. Drikos" <drikosev@gmail.com> asked some very good questions to
    which I don't know if I have good answers. But, because he is
    seriously trying to address the question, I am going to try and help.
    I leave it up to him and you (other readers) whether my help is
    actually helpful or not. So, these are just my thoughts.

    And let's focus on the "TO" question. I think his partial solution
    works fine for the BASIC keywords at the start of statements because
    they always need to start the token, but "TO", "THEN", and "ELSE" are
    a different problem: they occur mid-statement.

    As far as I can tell (and I haven't thought seriously about BASIC for
    years), there are roughly two cases.

    1) We have already identified the lower bound in "lower bound TO upper
    bound". This can happen if we saw a number, or maybe an identifier
    without the word "TO" in it (followed by a space?). This will not be
    true, if we just saw an operator such as "+" as we will be expecting
    another identifier to continue the expression. So, knowing what the
    parser expects is definitely important here.

    In this case, the "TO" must be the next thing we see. So, we are back
    to the beginning of the statement problem, and we can use the solution
    already proposed as is.

    2) We need an identifier to complete the expression "lower bound TO
    upper bound". In this case, we need at least one letter (assuming
    that we aren't allowed to start identifiers with digits) before the
    word "TO" and the "TO" must be followed by a space, a letter or a
    digit, but not an operator character. So, you need a regular
    expression that looks something like /[A-Za-z0-9]+TO[A-Za-z0-9 ]+/
    (note the extra space in the character class following the "TO").
    Hmm, you might need a left paren in that regex. The problem here is
    that you are basically expecting the lexer to do part of the parser's
    job, which is why such things tend to be very messy and likely fraught
    with errors or corner cases that are surprising and don't do what you
    want.

    Now, if you manage to get an appropriate regex, then you need to break
    that down into tokens. If you used a PCRE like regex package to do
    so, you might be able to use captures to do that for you.
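    As a sketch of the capture idea, here is the split done with POSIX
    regex.h standing in for a PCRE-style package (split_on_to is an
    invented name). One surprise worth noting: POSIX submatch rules make
    the first capture greedy, so on typical implementations it is the
    last TO, not the first, that ends up as the separator:

```c
#include <regex.h>
#include <string.h>

/* Match "<lower>TO<upper>" and pull the two operands out via captures.
   Returns 1 on a match (operands copied out), 0 otherwise. */
int split_on_to(const char *s, char *lower, char *upper)
{
    regex_t re;
    regmatch_t m[3];
    int ok = 0;

    if (regcomp(&re, "^([A-Z][A-Z0-9]*)TO([A-Z][A-Z0-9]*)$", REG_EXTENDED))
        return 0;
    if (regexec(&re, s, 3, m, 0) == 0) {
        int n1 = (int)(m[1].rm_eo - m[1].rm_so);
        int n2 = (int)(m[2].rm_eo - m[2].rm_so);
        memcpy(lower, s + m[1].rm_so, n1);
        lower[n1] = '\0';
        memcpy(upper, s + m[2].rm_so, n2);
        upper[n2] = '\0';
        ok = 1;
    }
    regfree(&re);
    return ok;
}
```

    On "ITOJTOK" the two captures reassemble around a single TO, but which
    TO you get is the engine's choice, which is exactly the kind of corner
    case warned about above.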

    Note, if you are using the lex/yacc interface where the parser calls
    the lexer and expects it to return one token each time, you need to
    insert a buffer so that you have a place to push tokens that you want
    to return on subsequent calls. Coroutines are a better solution, but
    they have a slightly different API. We use a coroutine model
    (implemented with a buffer) in Yacc++ between our lexer and parser, to specifically deal with this (and to allow our lexer/parser to work in
    event driven systems where the flow of control is inverted).
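    The buffer between lexer and parser can be as small as this sketch
    (yylex_wrapped, queue_token, and the scan_one stub are invented names,
    and the file-scope statics make it non-reentrant):

```c
/* A FIFO of already-recognized tokens sitting between lexer and parser.
   When one scan discovers several tokens at once (say, splitting
   ITOJ / TO / K), the extras are queued, and later calls drain the
   queue before touching the input again. */
enum { T_NONE = 0, T_ID, T_TO, T_NUMBER };

#define MAXPEND 16
static int pending[MAXPEND];
static int npend = 0, nextp = 0;

/* Called from inside a lexer action that matched more than one token. */
void queue_token(int tok)
{
    if (npend < MAXPEND)
        pending[npend++] = tok;
}

/* Stub for the real scanner; here it just reports end of input. */
int scan_one(void)
{
    return T_NONE;
}

/* What the parser calls in place of yylex(): drain queued tokens first,
   scan fresh input only when the queue is empty. */
int yylex_wrapped(void)
{
    if (nextp < npend)
        return pending[nextp++];
    nextp = npend = 0;
    return scan_one();
}
```

    A coroutine, as described for Yacc++ above, removes the fixed-size
    limit but changes the calling convention.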

    -----

    So, what would I do? I would do it in 3 parts. A lexer, a "fixer",
    and a parser. The lexer would just do the easy cases, returning
    identifiers separated by spaces (and punctuation), but possibly
    pulling off initial keywords at the start of statements (and probably
    only then).

    The "fixer" would be run on tokens in the middle of the statement and
    try to find these hard cases. It might need to know what the parser
    expects to get them right. If so, I would use a PEG parser (packrat,
    perhaps?) to get that parsing capability. If not, or if I can somehow direct
    the fixer with info from the parser (or use something like lexer
    states), I might use a PCRE scanner to pull things apart.

    Curiously, I suspect the fixer for all three special keywords looks
    the same. They all have similar context information. I don't know if
    that makes writing the relevant regex easier.

    After the fixer, the parser could probably be simple again.

    ----

    Worth noting: once upon a time, Terence Parr (the PCCTS/ANTLR guy)
    proposed and perhaps implemented a structure like this, where there
    were "channels" and "filters" that could go between the lexer and the
    parser. (I copied him on this reply in case he no longer reads this
    newsgroup.)

    You see a similar problem in implementing a C preprocessor. You have
    tokens for a lexer, but you need to re-process them into different
    tokens after doing macro substitution. So, this is not a one-off
    problem. It reoccurs in different contexts.

    However, I think a general solution is likely to be hard to formalize.
    That doesn't mean we should stop trying.

  • From Ev. Drikos@21:1/5 to Christopher F Clark on Mon Mar 2 20:04:56 2020
    On 02/03/2020 08:33, Christopher F Clark wrote:
    ...

    As far as I can tell (and I haven't thought seriously about BASIC for
    years), there are roughly two cases.

    Neither have I thought seriously about any BASIC dialect so far. The
    example grammar in my previous message took me less than half an hour.

    1) We have already identified the lower bound in "lower bound TO upper bound". This can happen if we saw a number, or maybe an identifier
    without the word "TO" in it (followed by a space?). This will not be
    true, if we just saw an operator such as "+" as we will be expecting
    another identifier to continue the expression...

    One is supposed to progressively add syntax as needed. I just found such
    a case at www.gkc.org.uk/martin/software/UK101-tapes.zip:

    5120 FOR J=K+1TON4:A(K,J)=A(K,J)/P:NEXT J:I=1

    Here is a more complete rule that parses e.g. "512 FOR J=LOWER+1TOUPPER":

    <lbound> ::=
        <lbound> <rel-op> <lbound> ~: <rel-op>
      | <lbound> <add-op> <lbound>
      | <lbound> <mult-op> <lbound>
      | <add-op> <lbound> -: { <mult-op> }
      | <number>
      | <id-p>

    IMHO, the most difficult problems come with statements like "NEXT i,j"
    that imply a shared end statement for two FOR loops; BASIC doesn't seem
    to be as easy as Fortran, where this problem has a simple solution:
    https://github.com/drikosev/Fortran/blob/master/OMP_Legacy_Parser.txt

    ...
    So, what would I do? I would do it in 3 parts. A lexer, a "fixer",
    and a parser...

    So, the suggestion is a fixer between the lexer and the parser. Ok, I
    appreciate your opinion.

    Ev. Drikos

  • From gah4@u.washington.edu@21:1/5 to Maury Markowitz on Mon Mar 2 21:12:23 2020
    On Wednesday, February 19, 2020 at 8:24:02 AM UTC-8, Maury Markowitz wrote:
    I'm trying to write a lex/yacc (flex/bison) interpreter for classic BASICs like the original DEC/MS, HP/DG etc.

    I have it mostly working for a good chunk
    of 101 BASIC Games (DEF FN is the last feature to add).

    Then I got to Super Star Trek. To save memory, SST removes most spaces, so lines look like this:

    100FORI=1TO10

    The first BASIC systems I ever used were the HP TSB2000 systems
    that were very popular in the 1970's. These systems tokenize each
    line on entry, and also syntax check it. If a line fails, it isn't
    even stored. Extra spaces or none, the line is stored the same way.
    (And this is the system that many of the original Star Trek games
    were written on.) Also, numeric constants are converted to their
    internal representation. The LIST command converts the tokenized form
    back to text form, and adds appropriate blank space.

    Note also that the TSB2000 systems only allow the traditional
    single-letter, or letter-plus-digit, form for variable names.

    The microcomputer BASIC systems that I remember tokenize lines, but
    don't do any more checking on them. As noted, blanks are stored and
    used when the LIST command displays the program.

    Looking at the GW-BASIC manual:

    http://www.divonasperi.it/divona/tam/tecnologia/dow-all/GW%20Basic%20(inglese).pdf

    and the PC-BASIC manual:

    https://robhagemans.github.io/pcbasic/doc/1.2/#memory

    (the latter intended to be bug-for-bug compatible with the MS version.)

    I don't see in either manual any mention of the effects, or lack
    thereof, of blanks in statements. There is an explanation of reserved
    words and their possible use in variable names. It seems to me that
    the treatment of missing blanks is an accident of the parser design.

    Bug-for-bug emulation, then, needs to find all those accidents
    and implement them.
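    The tokenize-on-entry behaviour described above can be imitated by scanning keyword-first: at each position, try the keyword table before falling back to a digit run or a single variable letter, which is exactly why blanks end up irrelevant. A rough sketch (the keyword list is abbreviated and hypothetical; the real interpreters had much larger tables):

```python
KEYWORDS = ("PRINT", "NEXT", "THEN", "GOTO", "FOR", "TO")

def crunch(line):
    """Tokenize a spaceless BASIC line by trying keywords at every
    position, so '100FORI=1TO10' splits correctly with or without blanks."""
    toks, i = [], 0
    while i < len(line):
        c = line[i]
        if c == " ":
            i += 1  # blanks carry no meaning here
            continue
        for kw in KEYWORDS:
            if line.startswith(kw, i):
                toks.append(kw)
                i += len(kw)
                break
        else:
            if c.isdigit():
                j = i
                while j < len(line) and line[j].isdigit():
                    j += 1
                toks.append(line[i:j])
                i = j
            else:
                toks.append(c)  # single-letter variable or operator
                i += 1
    return toks
```

    Both "100FORI=1TO10" and "100 FOR I=1 TO 10" then crunch to the same token sequence.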

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ev. Drikos@21:1/5 to Jerry on Thu Mar 12 17:45:51 2020
    On 21/02/2020 08:38, Jerry wrote:
    class Lexer:
    """
    Lexer emulates quirky Applesoft tokenizer as closely as possible.

    Thus, it diverges from what a normal lexer would do. In
    particular, spaces are completely insignificant, except within
    strings, except if it finds "AT" followed by a space and then an
    "N" appears. That is parsed as "AT N", not "ATN". Fun?;)
    """

    Hello,

    Lacking any Python skills, I can't test the lexer, but if this comment is accurate, then any spaces in literals should normally be ignored.

    The AppleSoft II manual (page 69, at www.classiccmp.org/cini/pdf/Apple/) implies that spaces are also important in literals after the first position; there is a similar comment in the manual for the Basic dialect supported by the Compukit UK101, at http://uk101.sourceforge.net/docs/index.html

    This isn't meant as a critique; it's a detail I'm interested in, see my next question.
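    The AT/ATN quirk quoted above hinges on the one place where Applesoft does look at a blank. A sketch of that special case (a simplification for illustration, not the real crunching routine):

```python
def split_at_n(src, i):
    """At position i, decide between the ATN function and the AT keyword
    followed by variable N: a blank between AT and N forces the 'AT N'
    reading, even though blanks are otherwise ignored."""
    if src.startswith("AT N", i):
        return ["AT", "N"]   # explicit blank: AT, then variable N
    if src.startswith("ATN", i):
        return ["ATN"]       # no blank: the arctangent function
    return []
```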

    Ev. Drikos

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Gene@21:1/5 to Maury Markowitz on Tue Apr 14 10:08:31 2020
    On Wednesday, February 19, 2020 at 11:24:02 AM UTC-5, Maury Markowitz wrote:
    I'm trying to write a lex/yacc (flex/bison) interpreter for classic BASICs like the original DEC/MS, HP/DG etc. ...

    The problem is that FORI part. Some BASICs allow variable names with more than
    two characters, so in theory, FORI could be a variable. ...

    Is there a canonical cure for this sort of problem that isn't worse than the disease?

    Flex regexes alone don't have enough power. But you could add a conditional REJECT when a normal ID match has a prefix that's a keyword. (This could be made fast with a trie, if that's an issue.) REJECT causes fallback to the "next best" rule, which seems to be what you want.
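    The effect of that conditional REJECT, a long identifier match giving way to a shorter keyword match, can be modelled outside flex like this (the keyword set is illustrative; a trie would replace the linear prefix scan if speed mattered):

```python
KEYWORDS = {"FOR", "TO", "NEXT", "PRINT"}

def scan_word(text, i):
    """Match the longest identifier at text[i:]; if it has a keyword
    prefix, 'reject' the long match and fall back to the keyword,
    mimicking what flex's REJECT achieves. Returns (token, next_pos)."""
    j = i
    while j < len(text) and text[j].isalnum():
        j += 1
    word = text[i:j]
    for k in range(len(word), 0, -1):  # longest prefix first
        if word[:k] in KEYWORDS:
            return word[:k], i + k     # the "next best" match
    return word, j                     # plain identifier
```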

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mertesthomas@gmail.com@21:1/5 to Maury Markowitz on Sun Apr 19 04:04:57 2020
    On 19/02/2020 17:35, Maury Markowitz wrote:
    I'm trying to write a lex/yacc (flex/bison) interpreter for classic BASICs like the original DEC/MS, HP/DG etc. I have it mostly working for a good chunk
    of 101 BASIC Games (DEF FN is the last feature to add).

    Then I got to Super Star Trek. To save memory, SST removes most spaces, so lines look like this:

    100FORI=1TO10

    You should take a look at Bas7, a Basic interpreter that I wrote years ago
    (see http://seed7.sourceforge.net/scrshots/bas7.htm).

    On that page I describe the different approaches I use to execute old Basic programs. Missing spaces is just one of the problems.

    I created Bas7 exactly for the purpose of executing Basic programs from the
    book "Basic Computer Games" by David H. Ahl.

    The Bas7 interpreter is written in Seed7. So you need to download Seed7
    to use Bas7, but this should not be a big problem. There is an installer at https://sourceforge.net/projects/seed7/files/bin/seed7_05_20191117_win.exe/download

    The installer is for Windows. For Linux you can download the source from https://sourceforge.net/projects/seed7/files/latest/download

    The file seed7/src/readme.txt describes what to do under Linux. Basically,
    you switch to the seed7/src directory and run "make depend" followed by "make".

    Regards,
    Thomas Mertes

    --
    Seed7 Homepage: http://seed7.sourceforge.net
    Seed7 - The extensible programming language: User defined statements
    and operators, abstract data types, templates without special
    syntax, OO with interfaces and multiple dispatch, statically typed,
    interpreted or compiled, portable, runs under linux/unix/windows.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From aston.goldsmith@gmail.com@21:1/5 to All on Tue May 5 13:05:18 2020
    Hi Maury,
    Thomas Mertes is right: Bas7 is a good example, and I tried it a few years
    back. Recently I built my own lexer/tokenizer in Oxygen Basic. Yes, it's in a
    BASIC compiler for Windows, but if you know and understand BASIC you should be able to modify it. Here it is:

    'micro(A) tokenizer by Aurel 29.3.2020
    Include "microBh.inc"
    INT startTime ,endTime: float procTime ' GetTickCount -timer init
    declare sub tokenizer( src as string) as INT
    declare sub run_tokenizer(inputCode as string) as INT
    int tkNULL=0, tkPLUS=1, tkMINUS=2, tkMULTI=3, tkDIVIDE=4
    int tkCOLON=5, tkCOMMA=6, tkLPAREN=7, tkRPAREN=8, tkLBRACKET=9, tkRBRACKET=10
    int tkIDENT=11, tkNUMBER=12, tkQSTRING=13, tkCOMMAND=14, tkEOL=15
    int tkEQUAL=16, tkMORE=17, tkLESS=18, tkAND=19, tkOR=20, tkNOT=21
    int tkHASH=22 , tkSSTR=23, tkMOD=24

    string tokList[1024] : int typList[1024] ' token/type arrays
    int p = 1, start = p, tp, tn, n, ltp=1, nTokens ' nTokens -> number of tokens
    int lineCount, Lpar, Rpar, Lbrk, Rbrk, tokerr ,codeLen=0
    string code, ch, tch, tk, crlf = chr(13)+chr(10), bf, ntk
    '--------------------------------------------------------------------
    'code = "2*(3+4)" + crlf + ' line 1
    '"': b =6 " + crlf + ' line 2
    ' ":if a>b" + crlf ' line 3
    ' ~~~~~~~~~~~~~~~~ MAIN TOKENIZER SUBROUTINE ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    SUB tokenizer(src as string) as int
    'print "tokenizer run;" + src
    lineCount=0:ltp=start : nTokens = 0
    while p <= len(src)
    '................................................................................................
    ch = mid(src,p,1) ' get char
    If asc(ch)=32 : p=p+1 : end if ' skip blank space[ ]
    If asc(ch)=9 : p=p+1 : end if ' skip TAB [ ]
    if asc(ch)=13 : p=p+1 : end if ' skip CR
    if asc(ch)=39 ' skip comment line[ ' ]
    while asc(ch) <> 10
    p++ : ch = mid(src,p,1) : if asc(ch)= 10 then exit while
    wend
    p++: goto endLoop ' jump to end of loop
    end if

    If asc(ch)=10 ' EOL
    if Lpar > Rpar : tokerr=3 : goto tokExit : end if ' if Rparen ((...)
    if Lpar < Rpar : tokerr=4 : goto tokExit : end if ' if Lparen (...))
    if Lbrk > Rbrk : tokerr=5 : goto tokExit : end if ' if Lbracket [..
    if Lbrk < Rbrk : tokerr=6 : goto tokExit : end if ' if Rbracket ...]
    lineCount++ : tp++ : tokList[tp]="EOL" :typList[tp]= tkEOL: tk="": ch="" : p++
    End if
    '--------------------------------------------------------
    If asc(ch)=34 ' if char is QUOTE "
    p++ : ch = mid(src,p,1) : tk=ch : p++ ' skip quote :add ch TO tk buffer: p+1
    while asc(ch) <> 34
    ch = mid(src,p,1) : if asc(ch)= 34 then exit while
    tk=tk+ch : p++
    IF ch = chr(10): tokerr = 2: goto tokExit : end if
    wend
    tp++ : tokList[tp]= tk :typList[tp]= tkQSTRING: tk="":ch="": p++ ' add quoted string to token list
    End if
    '-------------------------------------------------------
    If (asc(ch)>96 and asc(ch)<123) or (asc(ch)>64 and asc(ch)<91) or asc(ch)=95 ' [a-z,A-Z_]
    while (asc(ch)>96 and asc(ch)<123) or (asc(ch)>64 and asc(ch)<91) or (asc(ch)>47 and asc(ch)<58) or asc(ch)=95 ' [a-z,A-Z,0-9_]
    tk=tk+ch : p++ : ch = mid(src,p,1)
    wend
    ' ' add token ,add token type/IDENT:{VAR/COMMAND}
    tp++ : tokList[tp] = tk :typList[tp]= tkIDENT: tk="":ch=""
    End If
    '--------------------------------------------------------------
    If (asc(ch)>47 and asc(ch)<58) ' [0-9.]
    while (asc(ch)>47 AND asc(ch)<58) OR asc(ch)=46 ' [0-9[0.0]]*
    tk=tk+ch :p++ : ch = mid(src,p,1)
    wend
    ' add token ,add token type/NUMBER
    tp++ : tokList[tp] = tk : typList[tp]= tkNUMBER: tk="":ch=""
    End if
    '--------------------------------------------------------------------
    If asc(ch)=43 : tp++ : tokList[tp] = ch :typList[tp]= tkPLUS: ch="" : p++ : End if ' + plus
    If asc(ch)=45 : tp++ : tokList[tp] = ch :typList[tp]= tkMINUS: ch="" : p++ : End if ' - minus
    If asc(ch)=42 : tp++ : tokList[tp] = ch :typList[tp]= tkMULTI: ch="" : p++ : End if ' * multiply
    If asc(ch)=47 : tp++ : tokList[tp] = ch :typList[tp]= tkDIVIDE: ch="" : p++ : End if ' / divide
    If asc(ch)=40 : tp++ : tokList[tp] = ch :typList[tp]= tkLPAREN: ch="" : p++ : Lpar++ : End if ' ( Lparen
    If asc(ch)=41 : tp++ : tokList[tp] = ch :typList[tp]= tkRPAREN: ch="" : p++ : Rpar++ : End if ' ) Rparen
    If asc(ch)=44 : tp++ : tokList[tp] = ch :typList[tp]= tkCOMMA: ch="" : p++ : End if ' , comma
    If asc(ch)=58 : tp++ : tokList[tp] = ch :typList[tp]= tkCOLON: ch="" : p++ : End if ' : colon
    If asc(ch)=60 : tp++ : tokList[tp] = ch :typList[tp]= tkLESS: ch="" : p++ : End if ' < less
    If asc(ch)=61 : tp++ : tokList[tp] = ch :typList[tp]= tkEQUAL: ch="" : p++ : End if ' = equal
    If asc(ch)=62 : tp++ : tokList[tp] = ch :typList[tp]= tkMORE: ch="" : p++ : End if ' > more(greater)
    If asc(ch)=91 : tp++ : tokList[tp] = ch :typList[tp]= tkLBRACKET:ch="" : p++ : Lbrk++ :End if ' [ Lbracket
    If asc(ch)=93 : tp++ : tokList[tp] = ch :typList[tp]= tkRBRACKET:ch="" : p++ : Rbrk++ :End if ' ] Rbracket
    If asc(ch)=38 : tp++ : tokList[tp] = ch :typList[tp]= tkAND: ch="" : p++ : End if ' & AND
    If asc(ch)=124 :tp++ : tokList[tp] = ch :typList[tp]= tkOR: ch="" : p++ : End if ' | OR
    If asc(ch)=33 : tp++ : tokList[tp] = ch :typList[tp]= tkNOT: ch="" : p++ : End if ' ! NOT
    If asc(ch)=35 : tp++ : tokList[tp] = ch :typList[tp]= tkHASH: ch="" : p++ : End if ' # hash
    If asc(ch)=36 : tp++ : tokList[tp] = ch :typList[tp]= tkSSTR: ch="" : p++ : End if ' $ $TRING
    If asc(ch)=37 : tp++ : tokList[tp] = ch :typList[tp]= tkMOD : ch="" : p++ : End if ' % percent/MOD

    IF ASC(ch)>125 : tokerr = 1 : goto tokExit : END IF
    '.............................................................................................
    endLoop:
    wend
    Return tp
    tokExit:
    IF tokerr > 0
    if tokerr = 1: MsgBox "Unknown token!-[ " + ch +" ] at LINE: " + str(lineCount),"T:Error" : end if
    if tokerr = 2: MsgBox "Unclosed Quote!- at LINE: " + str(lineCount),"T:Error" : end if
    if tokerr = 3: MsgBox "Missing right paren! ((...)- at LINE: " + str(lineCount),"T:Error" : end if
    if tokerr = 4: MsgBox "Missing left paren! (...))- at LINE: " + str(lineCount),"T:Error" : end if
    if tokerr = 5: MsgBox "Missing right bracket!- at LINE: " + str(lineCount),"T:Error" : end if
    if tokerr = 6: MsgBox "Missing left bracket!- at LINE: " + str(lineCount),"T:Error" : end if
    Return 0
    END IF

    END SUB

    /*'call tokenizer..tested(ident,numbers) /////////////////////////////////
    int tn: tn = tokenizer(code)
    */
    'if tn=0 then goto ExitProgram >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    SUB run_tokenizer(s as string ) as INT

    tn = tokenizer(s)
    If tokerr > 0
    if tn = 0 then goto ExitTokenizer
    End if

    print "Number of tokens: " + str(tn) + crlf + "Number of lines: " + str(lineCount): nTokens = tn
    For n = 1 to tn : bf = bf + tokList[n] + crlf : Next n
    MsgBox bf,"Token List:" ' show token list
    return 1 ' if OK return 1
    ExitTokenizer:
    MsgBox "EXIT from TOKENIZER" ,"Process Terminated!"
    return 0
    END SUB

    IF codeLen>0
    ExitProgram:
    MsgBox "EXIT..." ,"Program Terminated!" : tn = 0
    END IF

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)