* expansion, memoization, and evaluation...
@ 2002-12-04 2:41 Rob Browning
2002-12-04 2:57 ` Mikael Djurfeldt
0 siblings, 1 reply; 13+ messages in thread
From: Rob Browning @ 2002-12-04 2:41 UTC (permalink / raw)
I thought it might be worthwhile if I made my support of Dirk's recent
work, and my current feelings about the related issues clear, though
I'm certainly ready and willing to change my position(s) if
appropriate.
ATM I'd really like to see guile make a clean separation between
stages of evaluation. Dirk has suggested perhaps four stages:
expansion (scheme->intermediate code)
compilation (intermediate->intermediate)
memoization-or-compilation (intermediate->{memoized,compiled})
execution
That arrangement seems like a pretty good initial goal, though I
realize that it may or may not end up being the "final arrangement".
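For concreteness, the four stages could be sketched as composable passes (Python rather than Scheme, purely for illustration; every name here is invented, not an actual guile internal):

```python
# Hypothetical sketch of Dirk's four-stage split.  Each stage is a
# pure function over a code representation, so a tool such as an
# offline compiler can tap in after any stage.

def expand(scheme_sexp):
    """expansion: scheme -> intermediate code (macros fully expanded)."""
    return ("intermediate", scheme_sexp)

def optimize(icode):
    """compilation: intermediate -> intermediate (optional passes)."""
    return icode

def memoize_or_compile(icode):
    """memoization-or-compilation: intermediate -> memoized/compiled."""
    return ("memoized", icode)

def execute(mcode):
    """execution: run the memoized/compiled form."""
    return mcode

def guile_eval(sexp):
    return execute(memoize_or_compile(optimize(expand(sexp))))

# An offline compiler (hobbit, say) could consume expand(sexp)
# directly and never have to deal with unexpanded macros.
```

The point is only the shape of the pipeline: a well-defined value flows between stages, and the post-expansion value is the natural input for offline compilation.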
ISTR that some people were concerned that having a separate expansion
stage might cause a performance loss. My current feeling is that
- if the performance hit is minimal, and will go away when we can
write/read pre-expanded (perhaps our first version of .scmc files
:>) code, then I'd be tempted to just ignore the loss. I feel
like the increased clarity of the process, and the potential for
optimizations/plugging in other compilers, etc. will be likely to
outweigh the loss.
Also note that if we have a point where it's easy to get access to
the post-expansion, but pre-memoization code, it becomes *much*
easier to add strong, offline compilation to guile. As an
example, my impression is that one of hobbit's biggest issues has
been dealing with macros (define-macro vs defmacro vs syncase).
If hobbit can be handed the pre-expanded code, it can completely
ignore macros.
- if the performance hit is not minimal, but if it's not all that
hard to add a #define SCM_I_BUILD_WITH_SEPARATE_EXPANSION_STEP,
then perhaps that would be a good approach for the short term --
you'd only enable that option if you were experimenting, if you
were a guile offline compiler, or if you had finally finished a
compiler whose performance improvements dwarfed the "separate step
performance loss".
Ideally we'll pick up more than enough performance improvements
elsewhere, given a cleaner infrastructure for people to hack on
(i.e. one that's approachable by more people) to outweigh the
performance loss that having separate evaluation stages might entail.
Another thing I'd like to suggest is that when considering things like
whether or not we should have unmemoization, first-class macros, etc.,
we consider how these things might affect an offline compiler. If
nothing else, we may not want to structure guile in such a way that we
provide mechanisms that preclude the code from ever being able to be
compiled offline. Part of the answer is to use eval-when (or
eval-case) appropriately.
Also, though we can structure guile's macros (and other things) to be
arbitrarily dynamic, that doesn't mean we should. Aside from the
performance costs involved, I feel like we ought to keep an eye on how
our choices affect both the comprehensibility of our implementation
and the scheme code that the end-user may write.
WRT macros, my general impression is that unless you are very clear
about the semantics of your macro system and your evaluation process,
and unless you're reasonably strict in what you allow, you're likely
to preclude much serious compilation because the compiler won't have
the assurances it needs in order to be able to make many substantial
optimizations. i.e. if you're not careful you can end up with the
compiler having to just convert
(foo bar baz)
into
(eval '(foo bar baz))
or similar, far more often than you'd like, because the compiler can't
be "sure enough" about foo, bar, or baz.
--
Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592 F9A0 25C8 D377 8C7E 73A4
_______________________________________________
Guile-devel mailing list
Guile-devel@gnu.org
http://mail.gnu.org/mailman/listinfo/guile-devel
* Re: expansion, memoization, and evaluation...
2002-12-04 2:41 expansion, memoization, and evaluation Rob Browning
@ 2002-12-04 2:57 ` Mikael Djurfeldt
2002-12-04 3:10 ` Rob Browning
2002-12-04 8:09 ` klaus schilling
0 siblings, 2 replies; 13+ messages in thread
From: Mikael Djurfeldt @ 2002-12-04 2:57 UTC (permalink / raw)
Cc: guile-devel, Rob Browning
Rob Browning <rlb@defaultvalue.org> writes:
> Another thing I'd like to suggest is that when considering things like
> whether or not we should have unmemoization, first-class macros, etc.,
> we consider how these things might affect an offline compiler.
Oops... This reminds me of another consideration I had when opting to
work on Scheme source: While methods are normally optimized at
generic application time, goops source can be compiled offline.
If the optimizer does source --> source transformation it's reasonably
easy to use it together with an offline compiler. It's more difficult
to explain the memoized code to the compiler...
M
* Re: expansion, memoization, and evaluation...
2002-12-04 2:57 ` Mikael Djurfeldt
@ 2002-12-04 3:10 ` Rob Browning
2002-12-04 3:31 ` Mikael Djurfeldt
2002-12-04 8:09 ` klaus schilling
1 sibling, 1 reply; 13+ messages in thread
From: Rob Browning @ 2002-12-04 3:10 UTC (permalink / raw)
Cc: Dirk Herrmann, guile-devel
Mikael Djurfeldt <mdj@kvast.blakulla.net> writes:
> Oops... This reminds me of another consideration I had when opting to
> work on Scheme source: While methods are normally optimized at
> generic application time, goops source can be compiled offline.
>
> If the optimizer does source --> source transformation it's reasonably
> easy to use it together with an offline compiler. It's more difficult
> to explain the memoized code to the compiler...
OK, I'm confused (and I'm pretty sure most of the difficulty is on my
end :>). I'm not completely familiar with how things work now, so
could you explain a bit if you have time?
In the above, am I right in presuming that by "work on Scheme source",
you're referring to the way your goops code uses the combination of
the scheme source and an envt representation during the process of
optimizing an invocation (a process I don't yet know much about)?
Also in the above, when you say "optimizer does source -> source
transformation", which optimizer are you referring to, and more
generally, how would the offline compilation process go in your
thinking?
scm-sexp -> expanded-sexp -> goops-optimized-sexp -> .o file?
or does the goops optimizer have to work in the dynamic envt at
runtime? If so, is there a way we can build a goops optimizer that's
more efficient than just falling back on eval?
--
Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592 F9A0 25C8 D377 8C7E 73A4
* Re: expansion, memoization, and evaluation...
2002-12-04 3:10 ` Rob Browning
@ 2002-12-04 3:31 ` Mikael Djurfeldt
2002-12-04 4:07 ` Rob Browning
0 siblings, 1 reply; 13+ messages in thread
From: Mikael Djurfeldt @ 2002-12-04 3:31 UTC (permalink / raw)
Cc: djurfeldt, Dirk Herrmann, guile-devel
The important goops optimizations are made based on type information.
In the on-line (interpreter) case the types are retrieved from the
arguments and the rewrite rules depend on knowing the bindings of
variables in the source. Yes, this is equivalent to what the current
goops source does, although the only optimization which is done
currently is supplying a "next-method" efficiently.
In the off-line case the types would need to be supplied by
flow-analysis in the compiler. This means that just as the optimizer
needs to be folded into evaluation in the on-line case, the optimizer
needs to be folded into compilation in the off-line case. That is,
the compiler needs to supply the optimizer with something equivalent
to what compile-method now gets from procedure-environment.
Does this answer your questions?
Best regards,
Mikael
* Re: expansion, memoization, and evaluation...
2002-12-04 3:31 ` Mikael Djurfeldt
@ 2002-12-04 4:07 ` Rob Browning
2002-12-04 7:07 ` Mikael Djurfeldt
0 siblings, 1 reply; 13+ messages in thread
From: Rob Browning @ 2002-12-04 4:07 UTC (permalink / raw)
Cc: Dirk Herrmann, guile-devel
Mikael Djurfeldt <mdj@kvast.blakulla.net> writes:
> In the on-line (interpreter) case the types are retrieved from the
> arguments and the rewrite rules depend on knowing the bindings of
> variables in the source. Yes, this is equivalent to what the current
> goops source does, although the only optimization which is done
> currently is supplying a "next-method" efficiently.
You may have already said this, but if the method is called later with
"different types", then does it have to notice that and recompute?
> In the off-line case the types would need to be supplied by
> flow-analysis in the compiler. This means that just as the
> optimizer needs to be folded into evaluation in the on-line case,
> the optimizer needs to be folded into compilation in the off-line
> case. That is, the compiler needs to supply the optimizer with
> something equivalent to what compile-method now gets from
> procedure-environment.
Ahh. Flow-analysis would be great, though I'm not sure we'd be likely
to have it immediately. Any chance some alternate optimization might
be easier when you're doing offline compilation? Unfortunately I
don't know enough about what goops is already doing to comment very
concretely yet, but I can imagine that you might be able to get
similar performance with an alternate approach when you can control
the object code you're emitting.
> Does this answer your questions?
I think so, yes, thanks.
--
Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592 F9A0 25C8 D377 8C7E 73A4
* Re: expansion, memoization, and evaluation...
2002-12-04 4:07 ` Rob Browning
@ 2002-12-04 7:07 ` Mikael Djurfeldt
2002-12-04 21:11 ` Rob Browning
0 siblings, 1 reply; 13+ messages in thread
From: Mikael Djurfeldt @ 2002-12-04 7:07 UTC (permalink / raw)
Cc: djurfeldt, Dirk Herrmann, guile-devel
Rob Browning <rlb@defaultvalue.org> writes:
> Mikael Djurfeldt <mdj@kvast.blakulla.net> writes:
>
>> In the on-line (interpreter) case the types are retrieved from the
>> arguments and the rewrite rules depend on knowing the bindings of
>> variables in the source. Yes, this is equivalent to what the current
>> goops source does, although the only optimization which is done
>> currently is supplying a "next-method" efficiently.
>
> You may have already said this, but if the method is called later with
> "different types", then does it have to notice that and recompute?
No, that copy of the compiled code will never be called with anything
but the types it's compiled for.
> Any chance some alternate optimization might be easier when you're
> doing offline compilation? Unfortunately I don't know enough about
> what goops is already doing to comment very concretely yet, but I
> can imagine that you might be able to get similar performance with
> an alternate approach when you can control the object code you're
> emitting.
Hmm... What do you mean by "control the object code"? Surely, there
is nothing about what I've said about goops which prevents the
optimizations in the "alternate approach" from being done? Maybe
there's a misunderstanding here: Goops gives source back to the
compiler. The compiler then can continue to do whatever optimizations
it chooses to, and also has full control over the object code it's
emitting.
Best regards,
Mikael
* Re: expansion, memoization, and evaluation...
2002-12-04 2:57 ` Mikael Djurfeldt
2002-12-04 3:10 ` Rob Browning
@ 2002-12-04 8:09 ` klaus schilling
2002-12-04 10:55 ` Mikael Djurfeldt
1 sibling, 1 reply; 13+ messages in thread
From: klaus schilling @ 2002-12-04 8:09 UTC (permalink / raw)
Cc: Dirk Herrmann, guile-devel, Rob Browning
Mikael Djurfeldt writes:
> If the optimizer does source --> source transformation it's reasonably
> easy to use it together with an offline compiler. It's more difficult
> to explain the memoized code to the compiler...
How is code that is generated at run-time and evaluated by eval,
eval-string, or local-eval handled by the optimizer?
Klaus Schilling
* Re: expansion, memoization, and evaluation...
2002-12-04 8:09 ` klaus schilling
@ 2002-12-04 10:55 ` Mikael Djurfeldt
0 siblings, 0 replies; 13+ messages in thread
From: Mikael Djurfeldt @ 2002-12-04 10:55 UTC (permalink / raw)
Cc: djurfeldt, Dirk Herrmann, guile-devel, Rob Browning
klaus schilling <pessy@chez.com> writes:
> Mikael Djurfeldt writes:
> > If the optimizer does source --> source transformation it's reasonably
> > easy to use it together with an offline compiler. It's more difficult
> > to explain the memoized code to the compiler...
>
> How is code that is generated at run-time and evaluated by eval,
> eval-string, or local-eval handled by the optimizer?
Clarification: The particular optimizer we're talking about here sits
just before method code is run for the first time, that is, as part
of the application of a generic function. It is supposed to mainly do
goops-specific optimizations.
Answer: The only way for eval, eval-string or local-eval to get to
code generated by the optimizer is by invoking a generic function.
This GF does type dispatch on its arguments. If the offline compiler,
through code analysis, has concluded that code will be needed for a
certain combination of arguments, the GF can select precompiled code.
If not, there are different possible alternatives to choose from:
Alt 1: Handle it as in the interpreter.
Alt 2: Same as alt 1, but invoke the compiler dynamically on the
output of the optimizer.
Alt 3: Invoke an unoptimized version of the method (with all internal
type dispatch intact).
Alt 3 is the "vanilla" way to do compilation. You see, I'm not
talking about peculiarities of goops putting constraints on how you
can do compilation. You can compile things the standard way just
fine. Rather, it's about novel opportunities.
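Those alternatives might be sketched like this (Python for illustration only; all names are hypothetical, not goops internals):

```python
# Hypothetical sketch: a generic function first consults code the
# offline compiler emitted for known argument-type signatures, and
# otherwise falls back to one of Mikael's alternatives.

def make_generic(precompiled, fallback, dynamic_compile=None):
    """precompiled: {type-signature: code}; fallback: Alt 1 or Alt 3."""
    def gf(*args):
        sig = tuple(type(a) for a in args)        # type dispatch
        code = precompiled.get(sig)
        if code is not None:                      # compiler saw this case
            return code(*args)
        if dynamic_compile is not None:           # Alt 2: compile now
            code = precompiled[sig] = dynamic_compile(sig)
            return code(*args)
        return fallback(*args)                    # Alt 1 or Alt 3
    return gf

add = make_generic({(int, int): lambda a, b: a + b},
                   fallback=lambda a, b: ("interpreted", a, b))
```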
M
* Re: expansion, memoization, and evaluation...
2002-12-04 7:07 ` Mikael Djurfeldt
@ 2002-12-04 21:11 ` Rob Browning
2002-12-04 21:47 ` Mikael Djurfeldt
0 siblings, 1 reply; 13+ messages in thread
From: Rob Browning @ 2002-12-04 21:11 UTC (permalink / raw)
Cc: Dirk Herrmann, guile-devel
Mikael Djurfeldt <mdj@kvast.blakulla.net> writes:
>> You may have already said this, but if the method is called later with
>> "different types", then does it have to notice that and recompute?
>
> No, that copy of the compiled code will never be called with anything
> but the types it's compiled for.
OK, so does that mean that at each invocation, you need to look at the
incoming types and check to see if you already have a cached method
that matches the incoming signature? i.e. if you have
(blah)
(foo bar baz)
(blarg)
and foo is a generic function, and last time through, bar and baz were
integers, but this time bar and baz are strings. Would the current
behavior be for goops to check, notice this, and build a new
"precompiled" invocation for two strings? (Just trying to check to
see that I understand...)
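If I've understood Rob's question, the behavior he describes would look roughly like this (a Python sketch with invented names; the real mechanism is compile-method plus the GF method cache):

```python
# Hypothetical sketch: check the incoming types against a cache of
# already-specialized method variants; on a miss, build a new variant
# for that signature and cache it.

method_cache = {}

def compile_method_for(sig):
    # stand-in for goops' compile-method for one type signature
    return lambda *args: ("specialized", sig, args)

def apply_generic(*args):
    sig = tuple(type(a) for a in args)
    cmethod = method_cache.get(sig)
    if cmethod is None:                 # first call with these types
        cmethod = method_cache[sig] = compile_method_for(sig)
    return cmethod(*args)
```

So a first call with two integers builds an integer variant, a later call with two strings builds and caches a second variant, and neither variant is ever invoked with other types.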
> Hmm... What do you mean by "control the object code"? Surely, there
> is nothing about what I've said about goops which prevents the
> optimizations in the "alternate approach" from being done?
Well as yet I don't have a clear idea in mind, and in fact a number of
the optimizations I've thought of would require flow and scope
analysis. To some extent I'm just speculating about possibilities,
inspired by clever (non-goops-specific) hacks that can be possible
when you know enough about a closed region of source. For example, if
you know that within a given function (or closed set of functions) you
use some set of symbols, and within the set you have big (case foo
...) statements using those symbols, you may be able to compile the
object code to use plain integers to represent these symbols and then
issue C-style switches to handle the case statements. Alternately you
might be able to use a "small consecutive integers" numbering scheme
to represent the symbols and then per-case vector jump tables with
those integers as indices for the case statements. Either way should
beat the much more naive O(N) approach:
if (SCM_EQ_P (foo, x_sym)) { ... }
else if (SCM_EQ_P (foo, y_sym)) { ... }
...
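The jump-table variant could look roughly like this (Python standing in for the emitted object code; all names invented):

```python
# Hypothetical sketch: at compile time, number the symbols used in a
# closed region consecutively; case statements then dispatch through
# a vector of handlers indexed by that integer -- O(1), like a C
# switch -- instead of an O(N) chain of eq? comparisons.

SYMBOL_ID = {"x": 0, "y": 1, "z": 2}          # assigned at compile time

def handle_x(): return "case x"
def handle_y(): return "case y"
def handle_z(): return "case z"

CASE_TABLE = [handle_x, handle_y, handle_z]   # per-case jump vector

def dispatch(sym):
    return CASE_TABLE[SYMBOL_ID[sym]]()
```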
> Maybe there's a misunderstanding here: Goops gives source back to
> the compiler. The compiler then can continue to do whatever
> optimizations it chooses to, and also has full control over the
> object code it's emitting.
That makes sense. The reason I was confused was because it sounded
like goops was making decisions based on the runtime types of
arguments, and if so, and if you were doing compilation offline, then
you wouldn't have access to that information. Your comment about
possibly having to use type flow analysis for offline compilation
cleared that up for me.
(Of course if the guile compiler were implemented targeting C, and if
guile were to "Depends: gcc", we might be able to use dlopen/dlsym to
support heavyweight online compilation. Though first-time execution
would be awfully painful unless your machine was really fast ;>)
--
Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592 F9A0 25C8 D377 8C7E 73A4
* Re: expansion, memoization, and evaluation...
2002-12-04 21:11 ` Rob Browning
@ 2002-12-04 21:47 ` Mikael Djurfeldt
2002-12-05 0:07 ` Rob Browning
0 siblings, 1 reply; 13+ messages in thread
From: Mikael Djurfeldt @ 2002-12-04 21:47 UTC (permalink / raw)
Cc: djurfeldt, Dirk Herrmann, guile-devel
Rob Browning <rlb@defaultvalue.org> writes:
> OK, so does that mean that at each invocation, you need to look at the
> incoming types and check to see if you already have a cached method
> that matches the incoming signature? i.e. if you have
>
> (blah)
> (foo bar baz)
> (blarg)
>
> and foo is a generic function, and last time through, bar and baz were
> integers, but this time bar and baz are strings. Would the current
> behavior be for goops to check, notice this, and build a new
> "precompiled" invocation for two strings?
Yes.
> For example, if you know that within a given function (or closed set
> of functions) you use some set of symbols, and within the set you
> have big (case foo ...) statements using those symbols, you may be
> able to compile the object code to use plain integers to represent
> these symbols and then issue c-style switches to handle the case
> statements.
Yes, a wonderful optimization. :-)
And if the compiler gets source back from the goops optimizer or does
flow analysis, it may know the types of some arguments and some
expressions in the source, and might be able to use native integer or
double representation. In this context it's nice that a goops
"cmethod" (the result of compile-method) has a fixed type signature.
> (Of course if the guile compiler were implemented targeting C, and if
> guile were to "Depends: gcc", we might be able to use dlopen/dlsym to
> support heavyweight online compilation.
Yes, yes, yes!
> Though first-time execution would be awfully painful unless your
> machine was really fast ;>)
I wouldn't be so sure. Of course we wouldn't get "real-time"
performance but I think the performance of the current goops is
promising considering what *it* does for each "first" invocation:
Traversing and rewriting the entire method source and rebuilding the
entire GF method cache up to eight times... and all of it done on the
Scheme level.
Maybe I shouldn't reveal this :), but I've seen the current goops as a
large-scale experiment to test whether these crazy ideas really work
in practice. And they actually seem to. I've made heavy use of goops
in large systems consisting of maybe 50 modules, and I don't have any
complaints on its performance.
Best regards,
Mikael
* Re: expansion, memoization, and evaluation...
2002-12-04 21:47 ` Mikael Djurfeldt
@ 2002-12-05 0:07 ` Rob Browning
2002-12-05 16:27 ` Marius Vollmer
0 siblings, 1 reply; 13+ messages in thread
From: Rob Browning @ 2002-12-05 0:07 UTC (permalink / raw)
Cc: Dirk Herrmann, guile-devel
Mikael Djurfeldt <mdj@kvast.blakulla.net> writes:
>> and foo is a generic function, and last time through, bar and baz were
>> integers, but this time bar and baz are strings. Would the current
>> behavior be for goops to check, notice this, and build a new
>> "precompiled" invocation for two strings?
>
> Yes.
OK. Now I think I understand.
>> (Of course if the guile compiler were implemented targeting C, and if
>> guile were to "Depends: gcc", we might be able to use dlopen/dlsym to
>> support heavyweight online compilation.
>
> Yes, yes, yes!
FWIW I've actually been playing around with ksi, gcc, and the gcc
front-end/back-end stuff more, and there are some interesting
possibilities. Imagine if you created a gcc front-end using guile
that linked any resulting binaries against libguile. It seems like
this would mean you could take a very lazy approach to improving your
compiler. At first, you might not do much better than guile does now,
just replacing (+ x y) with something like the following (I'm using
ksi-like syntax below, but ksi is just a thin wrapper around the
native "tree" structure used by all the front ends as input to gcc's
code generator):
(call scm_add (ref x_4432) (ref y_2231))
but later, as we get smarter about flow analysis, etc. we might be
able in some cases to generate:
(plus (ref x_4432) (ref y_2231))
which would be *way* faster.
(Note that no one should panic -- I'm not about to advocate we jump in
this direction right now -- I'm just playing around to see what's
possible).
One thing I'm not clear on at the moment -- the newer gcc's support
-foptimize-sibling-calls, which appears to work even for mutually
recursive functions, but I was wondering if there was any chance this
could work *across* .o files, or if it only worked within the same
object file. Any gcc gurus about?
I'm presuming cross-boundary optimized tail calls would likely require
non-standard C calling conventions, and AFAIK gcc only supports one
calling convention for external functions. Ideally, to be able to
generate *really* fast code across .o boundaries, we'd want to be able
to generate external function references that use a calling convention
that's tail-call friendly, and I'm not sure that's possible yet, or
even planned.
One other interesting possibility for a guile compiler as a gcc front
end would be the possibility of either embedding a copy of gcc's C
parser at guile compiler build time (or perhaps just adding hooks into
the existing gcc parser if the upstream were amenable) so that we can
do *real* C code preprocessing -- i.e. automatically extract C
function signatures for wrapper generation at compile time, add
precise GC annotations, or whatever (i.e. perhaps some of the
fascinating stuff Tom Lord has suggested).
(Hmm, I may have already mentioned some of this stuff -- I can't
remember whether that was here or in other private conversations
:/)
> Maybe I shouldn't reveal this :), but I've seen the current goops as a
> large-scale experiment to test whether these crazy ideas really work
> in practice. And they actually seem to. I've made heavy use of goops
> in large systems consisting of maybe 50 modules, and I don't have any
> complaints on its performance.
Interesting.
--
Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592 F9A0 25C8 D377 8C7E 73A4
* Re: expansion, memoization, and evaluation...
2002-12-05 0:07 ` Rob Browning
@ 2002-12-05 16:27 ` Marius Vollmer
2002-12-05 17:07 ` Rob Browning
0 siblings, 1 reply; 13+ messages in thread
From: Marius Vollmer @ 2002-12-05 16:27 UTC (permalink / raw)
Cc: djurfeldt, Dirk Herrmann, guile-devel
Rob Browning <rlb@defaultvalue.org> writes:
> (call scm_add (ref x_4432) (ref y_2231))
>
> but later, as we get smarter about flow analysis, etc. we might be
> able in some cases to generate:
>
> (plus (ref x_4432) (ref y_2231))
>
> which would be *way* faster.
Just some random thoughts since I have done this in guile-lightning:
we should definitely inline fixnum arithmetic and call out of line
code only for non-fixnums or overflows. That gives a big improvement
over just calling scm_sum all the time.
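The fast-path/slow-path split Marius describes, sketched in Python (invented names; the bounds mimic a 30-bit tagged fixnum):

```python
# Hypothetical sketch: inline the fixnum addition and branch to the
# general out-of-line routine only for non-fixnums or on overflow.

FIXNUM_MIN, FIXNUM_MAX = -(2 ** 29), 2 ** 29 - 1

def is_fixnum(x):
    return isinstance(x, int) and FIXNUM_MIN <= x <= FIXNUM_MAX

def scm_sum_slow(a, b):
    # stand-in for the general scm_sum (bignums, flonums, ...)
    return a + b

def fast_add(a, b):
    if is_fixnum(a) and is_fixnum(b):         # inline fast path
        r = a + b
        if FIXNUM_MIN <= r <= FIXNUM_MAX:     # no overflow
            return r
    return scm_sum_slow(a, b)                 # out-of-line fallback
```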
--
GPG: D5D4E405 - 2F9B BCCC 8527 692A 04E3 331E FAF8 226A D5D4 E405
* Re: expansion, memoization, and evaluation...
2002-12-05 16:27 ` Marius Vollmer
@ 2002-12-05 17:07 ` Rob Browning
0 siblings, 0 replies; 13+ messages in thread
From: Rob Browning @ 2002-12-05 17:07 UTC (permalink / raw)
Cc: djurfeldt, Dirk Herrmann, guile-devel
Marius Vollmer <mvo@zagadka.ping.de> writes:
> Just some random thoughts since I have done this in guile-lightning:
> we should definitely inline fixnum arithmetic and call out of line
> code only for non-fixnums or overflows. That gives a big
> improvement over just calling scm_sum all the time.
Makes sense. One nice thing about guile-lightning, and something I
haven't seen in gcc's backend interface is a command to branch on
overflow. On platforms that support it, that could be really
helpful...
--
Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592 F9A0 25C8 D377 8C7E 73A4