From: Daniel Colascione <dancol@dancol.org>
To: Vladimir Kazanov <vekazanov@gmail.com>
Cc: Stephen Leake <stephen_leake@stephe-leake.org>, emacs-devel@gnu.org
Subject: Re: Tokenizing
Date: Mon, 22 Sep 2014 06:55:00 -0700
Message-ID: <54202A34.6050206@dancol.org>
In-Reply-To: <CAAs=0-3grY7DYkQDr9vbccp2iTUzkJWnFwSmOmu0nhSVB4ZzxQ@mail.gmail.com>
On 09/22/2014 03:21 AM, Vladimir Kazanov wrote:
> On Mon, Sep 22, 2014 at 1:01 AM, Daniel Colascione <dancol@dancol.org> wrote:
>
>> I've been working (very, very, very slowly) on similar functionality.
>> The basic idea is based on the incremental lexing algorithm that Tim A.
>> Wagner sets out in chapter 5 of his thesis [1]. The key is dynamically
>> tracking lookahead used while we generate each token. Wagner's
>> algorithm allows us to incorporate arbitrary lookahead into the
>> invalidation state, so supporting something like flex's unlimited
>> trailing context is no problem.
>>
>> The nice thing about this algorithm is that like the parser, it's an
>> online algorithm and arbitrarily restartable.
>
> I have already mentioned Wagner's paper in previous messages.
> Actually, it is the main source of inspiration :-) But I think it is a
> bit overcomplicated, and the only implementation I have seen (NetBeans'
> Lexer API) does not even try to implement it completely. Which is
> okay; academic papers tend to idealize things.
That Lexer is a dumb state matcher, last time I checked. So is
Eclipse's. Neither is adequate, at least not if you want to support
lexing *arbitrary* languages (e.g., Python and JavaScript) with
guaranteed correctness in the face of arbitrary buffer modification.
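To be concrete about the lookahead tracking I described above, here is a
rough, untested Emacs Lisp sketch (all names invented, and it ignores
marker adjustment and multi-character edits): each token records how far
past its end the lexer had to look, and an edit invalidates any token
whose sensitive region touches the change.

    (require 'cl-lib)
    (require 'seq)

    ;; Each token remembers its extent plus how far past its end the
    ;; lexer had to look to decide it.
    (cl-defstruct my-token start end type lookahead)

    (defun my-token-damaged-p (tok pos)
      "Non-nil if an edit at buffer position POS can change TOK."
      (and (>= pos (my-token-start tok))
           (< pos (+ (my-token-end tok) (my-token-lookahead tok)))))

    (defun my-first-damaged-token (tokens pos)
      "Return the first token in TOKENS invalidated by an edit at POS.
    Incremental relexing restarts at that token's start."
      (seq-find (lambda (tok) (my-token-damaged-p tok pos)) tokens))
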
> You do realize that this is a problem for the client code? We can only
> recommend using this or that regex engine, or perhaps set the lookahead
> value for various token types by hand; the latter would probably work
> for most real-life cases.
>
> I am not even sure that it is possible to do it Wagner's way (have
> a real next_char() function) in Emacs. I would check the Lexer API's
> approach as a starting point.
Of course it's possible to implement this in Emacs. Buffers are strictly
more powerful than character streams.
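As a minimal, untested sketch (names invented): a next_char()-style
stream over a buffer needs nothing more than a position to read from and
a record of how far the lexer has peeked, which is exactly what the
lookahead bookkeeping above consumes.

    ;; A character stream over a buffer that records the furthest
    ;; position it has read, so lookahead can be attributed to the
    ;; token currently being built.
    (defvar my-lex-pos nil "Next buffer position the lexer will read.")
    (defvar my-lex-max nil "Furthest buffer position the lexer has examined.")

    (defun my-lex-reset (pos)
      "Start lexing a fresh token at buffer position POS."
      (setq my-lex-pos pos
            my-lex-max pos))

    (defun my-next-char ()
      "Return the next character and advance, or nil at end of buffer."
      (prog1 (char-after my-lex-pos)
        (setq my-lex-max (max my-lex-max my-lex-pos)
              my-lex-pos (1+ my-lex-pos))))

    (defun my-peek-char (&optional n)
      "Look N (default 0) characters ahead without consuming them."
      (let ((pos (+ my-lex-pos (or n 0))))
        (setq my-lex-max (max my-lex-max pos))
        (char-after pos)))
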
>
>> Where my thing departs from flex is that I want to use a regular
>> expression (in the rx sense) to describe the higher-level parsing
>> automaton instead of making mode authors fiddle with start states. This
>> way, it's easy to incorporate support for things like JavaScript's
>> regular expression syntax, in which "/" can mean one of two tokens
>> depending on the previous token.
>>
>> (Another way of dealing with lexical ambiguity is to let the lexer
>> return an arbitrary number of tokens for a given position and let the
>> GLR parser sort it out, but I'm not as happy with that solution.)
>
> I do not want to solve any concrete lexing problems. The whole point
> is to supply a way to do it incrementally. I do not want to know
> anything about the code above or below, be it GLR/LR/flex/etc.
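Fair enough. Still, to illustrate the kind of lexical ambiguity I had in
mind above, here is a toy, untested sketch (invented names; the real
JavaScript rules are more involved): the meaning of "/" depends on the
token that precedes it.

    ;; A toy disambiguation rule for JavaScript's "/": after a token
    ;; that can end an expression, "/" is the division operator;
    ;; otherwise it begins a regexp literal.  A real lexer would fold
    ;; this into the lexing automaton rather than special-case it.
    (defun my-js-slash-token (prev-token-type)
      "Decide what \"/\" means given PREV-TOKEN-TYPE."
      (if (memq prev-token-type
                '(identifier number string close-paren close-bracket))
          'division-operator
        'regexp-start))
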
>
>>
>> There are two stages here: you want in *some* cases for fontification to
>> use the results of tokenization directly; in other cases, you want to
>> apply fontification rules to the result of parsing that token stream.
>> Splitting the fontification rules between terminals and non-terminals
>> this way helps us maintain rudimentary fontification even for invalid
>> buffer contents --- that is, if the user types gibberish in a C-mode
>> buffer, we want constructs that look like keywords and strings in that
>> gibberish stream to be highlighted.
>
> Yes, and it is the client code that has to decide those things, be
> it using only the token list for fontification or letting a
> higher-level parser do it.
Unless the parser itself is incremental, you're going to have
interactivity problems.
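Concretely, the split I have in mind looks something like this untested
sketch (invented names, reusing the token struct from the earlier
sketch): token-level fontification works even when the buffer does not
parse, and the parse tree layers nonterminal faces on top when it is
available.

    ;; Token-level fontification: applies faces straight from token
    ;; types, so keywords and strings stay highlighted even when the
    ;; surrounding text is gibberish that no parse tree covers.
    (defvar my-token-faces
      '((keyword . font-lock-keyword-face)
        (string  . font-lock-string-face)
        (comment . font-lock-comment-face)))

    (defun my-fontify-tokens (tokens)
      "Apply faces to TOKENS from the lexer, ignoring the parse tree."
      (dolist (tok tokens)
        (let ((face (cdr (assq (my-token-type tok) my-token-faces))))
          (when face
            (put-text-property (my-token-start tok) (my-token-end tok)
                               'face face)))))
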
>>> I will definitely check it out, especially because it uses GLR (it
>>> really does?!), which can be non-trivial to implement.
>>
>> Wagner's thesis contains a description of a few alternative incremental
>> GLR algorithms that look very promising.
>
> Yes, and a lot more :-) I want to concentrate on a smaller problem;
> I don't feel like implementing the whole thesis right now.
>
>> I have a few extensions in mind too. It's important to be able to
>> quickly fontify a particular region of the buffer --- e.g., while scrolling.
>>
>> If we've already built a parse tree and damage part of the buffer, we
>> can repair the tree and re-fontify fairly quickly. But what if we
>> haven't parsed the whole buffer yet?
>>
>
> Nice. And I will definitely want to discuss all the optimization
> possibilities later. First, the core logic has to be implemented.
>
> Bottom line: I want to take this particular narrow problem, plus a few
> user code examples (for me it is a port of CPython's LL(1) parser), and
> see if I can solve it in an optimal way. A working prototype will take
> some time, a month or more - I am not in a hurry.
>
> As far as I understand, you want to cooperate on it, right?
*sigh* It sounds like you want to create something simple. You'll run
into the same problems I did, or you'll produce something less than
fully general. I don't have enough time to work on something that isn't
fully general. I'm sick of writing language-specific text parsing code.
Have fun.