From: stefan <stefan.tampe@spray.se>
To: guile-devel@gnu.org
Subject: Re: fmatch
Date: Fri, 7 May 2010 22:53:05 +0200	[thread overview]
Message-ID: <201005072253.05098.stefan.tampe@spray.se> (raw)
In-Reply-To: <8739y3lav2.fsf@gnu.org>

> > The core of the matcher is a loop with a set of ifs arranged so that on
> > average it will do 2-3 checks and then find the instruction, so no named
> > labels.
> 
> You mean no “labels as values” (info "(gcc) Labels as Values"), right?

Yep!
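
For concreteness, here is a minimal sketch in Scheme of the dispatch style I
mean; it is purely illustrative, not the actual fmatch C code, and the opcode
names are made up. The instruction loop dispatches through an ordinary cascade
of ifs, ordered so the common instructions are reached after two or three
checks, rather than through GCC's labels-as-values:

(define (run prog)
  ;; PROG is a list of opcodes; the loop walks it and dispatches each one
  ;; through a cascade of eq? tests, most frequent opcode first.
  (let loop ((prog prog) (acc '()))
    (if (null? prog)
        (reverse acc)
        (let ((op (car prog)))
          (if (eq? op 'push-pair)
              (loop (cdr prog) (cons 'pair acc))
              (if (eq? op 'push-const)
                  (loop (cdr prog) (cons 'const acc))
                  (if (eq? op 'nop)
                      (loop (cdr prog) acc)
                      (error "unknown instruction" op))))))))

;; (run '(push-pair nop push-const))  =>  (pair const)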
 
> Hmmmm.  My first reaction is that I’d rather avoid complex VM
> instructions like this and instead focus on native compilation (AOT or
> JIT) when we feel like improving performance.
> 
> What do you think?

Well, I think that for plain pattern matching, sane compilation is the way to go
in the long run. For unification I'm not sure. Code generated by unwinding
unification into plain Scheme typically grows geometrically with the number of
words in the pattern, and I'm not sure that is needed. I think it is hard to
beat a tiny VM that does this well in C; in a way that is a clean solution.
Think of debugging the generated code for errors: you would need special hacks
on the Scheme side to make that workable, and there is a lot of similar
overhead associated with bloated code. So my thinking is that for some time we
will need this tiny VM as one extra, although complex, instruction. Why not use
it for plain matching as well? At some point we could write this tiny VM in
Scheme :-). Implementing it all in Scheme today would mean working with a slow
matcher, which is not ideal.
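
To illustrate the kind of growth I mean, here is a hypothetical hand-unwound
unification of the tiny pattern (X . X) in plain Scheme; this is just a sketch,
not output from any real expander. The first occurrence of X binds it and the
second collapses into an explicit test; once the data can itself contain
unbound logic variables, every position also needs its own bound/unbound branch
at run time, and open-coding all of that for deeper terms is where the
expansion balloons:

(define (unify-x-dot-x v succeed fail)
  ;; Unify V against the pattern (X . X).
  (if (pair? v)
      (let ((x (car v)))            ; first occurrence of X: bind it
        (if (equal? x (cdr v))      ; second occurrence: X is now bound,
            (succeed x)             ; so it becomes an equality test
            (fail)))
      (fail)))

;; (unify-x-dot-x '(a . a) (lambda (x) x) (lambda () #f))  =>  a
;; (unify-x-dot-x '(a . b) (lambda (x) x) (lambda () #f))  =>  #f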

But to help judge what to do: how far are we from a native Scheme compiler?

> [...]
> 
> > I also have similar code for use in Prolog-like matching, e.g.
> > unification + backtracking; I would expect at least the same speedup here.
> > Note that for this application a pure unwinding of the logic, similar to
> > the match macro, will yield very bloated code, while going for a targeted
> > virtual machine will make for lean and efficient code.
> >
> > I hope that in the end a Prolog implementation using this approach would
> > be 3-4 times slower than what you have with a more targeted platform like
> > GNU Prolog.
> 
> This all sounds like exciting work!

I have a matcher right now for the Einstein-puzzle case, and it takes 90 ms
compared to 16-20 ms for GNU Prolog on my machine (the same method is used).
I think that if I can dispatch more cleanly I could reduce it even further.

The code for the member function currently looks like this:

(def gp-member
  ;; First clause: the head of the list unifies with X, so call the
  ;; continuation F; if F fails, undo the bindings recorded in the frame
  ;; *gp-fi* and try the next alternative with gp-next.
  ( X (+ (X . L)) F   (if (F)
                          #t
                          (begin (gp-unwind *gp-fi*) (gp-next))) )
  ;; Second clause: skip the head (Y) and search the rest of the list.
  ( X (+ (Y . L)) F   (gp-member X L F) )
  ;; Third clause: the list is empty, so fail.
  ( X (+ ())      F   #f ))

Here the + directive extension is used to indicate a unifying match.
*gp-fi* is implicitly bound to a frame so that we can undo any variable
bindings. I find it very nice to work with pattern matchers that can
handle both ordinary pattern matching and unifying pattern matching, so it is
quite useful to bundle these two things together.
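
In case it helps, here is a minimal sketch in plain Scheme of the frame/unwind
idea behind *gp-fi*; the names and representation are illustrative only, not
the actual gp-* implementation. Bindings are recorded on a trail, a frame is
just a mark into that trail, and unwinding to a frame undoes every binding made
since that mark, which is what a failed unifying clause needs before it
backtracks:

(define *trail* '())                  ; cells bound since the start

(define (make-var) (list 'unbound))   ; a fresh, unbound logic variable
(define (save-frame) *trail*)         ; a frame is just a trail mark

(define (bind! var value)             ; bind VAR and remember it on the trail
  (set-car! var value)
  (set! *trail* (cons var *trail*)))

(define (unwind! frame)               ; undo bindings made after FRAME
  (if (eq? *trail* frame)
      #t
      (begin
        (set-car! (car *trail*) 'unbound)
        (set! *trail* (cdr *trail*))
        (unwind! frame))))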


 
> > So in a sense this is a test of what you want: compare the speed
> > difference between a C-based and a Scheme-based solution. Still, I expect
> > the match construct to be easier to hack and improve upon, and at some
> > point a direct match version will win because you can compile it natively.
> 
> Well, until 1.8, the focus in Guile had been to write the “hot spots” in
> C, for performance.  However, that doesn’t scale well and leads to a
> hard to maintain code base (we want to write Scheme in the first place,
> not C).

I agree; that matches my experience.

> For 2.2 and beyond, I really think the focus should be on allowing hot
> spots to be written in Scheme, which means compiling Scheme code
> natively.  This would be beneficial to all Scheme code, not just this
> specific pattern matching construct.

This is clearly a good move. OK, I see your point here; I could write the whole
thing out in Scheme directly. It would still be nice to have an implementation
in C to compare against what you get when introducing this code. Also, one
should focus on things in the right order. Say I spend the next two weeks
writing a small Prolog implementation: should we wait until after 2.2 to get
the suggested speed, and live with a 15x performance hit in the meantime? It is
tempting to deliver that system and then spend the next years gradually
rewriting it in pure Scheme.

Also, I use this way of programming a lot. It would be cool to have a fast
implementation at my desk within a short timeframe.

At least it is a fun hack!

/Stefan

Thread overview: 21+ messages
2010-05-06 20:39 fmatch stefan
2010-05-07 11:59 ` fmatch Ludovic Courtès
2010-05-07 14:24   ` fmatch stefan
2010-05-07 20:23     ` fmatch Ludovic Courtès
2010-05-07 20:53       ` stefan [this message]
2010-05-09 20:57         ` fmatch Ludovic Courtès
2010-05-09 20:52           ` fmatch stefan
2010-05-10  8:26             ` fmatch Ludovic Courtès
2010-05-11 14:26               ` fmatch Stefan
2010-05-17 20:08               ` fmatch stefan
2010-05-22 21:03                 ` fmatch Ludovic Courtès
2010-05-22 21:31                   ` fmatch stefan
2010-05-23 16:06                     ` fmatch Ludovic Courtès
2010-05-23 15:47                   ` fmatch stefan
2010-05-24 20:08                     ` fmatch Ludovic Courtès
2010-05-24 21:05                       ` fmatch stefan
2010-05-25 17:41                         ` fmatch Ludovic Courtès
2010-05-25 21:10                           ` fmatch stefan
2010-06-16 21:31                             ` fmatch Ludovic Courtès
2010-06-20 19:58                               ` fmatch stefan
2010-06-20 21:56                                 ` fmatch Ludovic Courtès