Subject: our benchmark-suite
From: Andy Wingo @ 2012-04-23  9:22 UTC
To: guile-devel

Hi,

I was going to try to optimize vhash-assoc, but I wanted a good
benchmark first, so I started to look at our benchmark suite.  We have
some issues to deal with.

For those of you who are not familiar with the benchmark suite, we have
a bunch of benchmarks in benchmark-suite/benchmarks/: those files that
end in ".bm".  The format of a .bm file is like our .test files, except
that instead of `pass-if' and the like, we have `benchmark'.  You run
benchmarks via ./benchmark-guile in the $top_builddir.
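
For concreteness, a minimal .bm file might look something like the
following.  (The module name, iteration count, and benchmark body here
are just illustrative; `benchmark' and `with-benchmark-prefix' are the
forms exported by benchmark-suite/lib.scm.)

    ;; sketch of a .bm file; names and counts are illustrative
    (define-module (benchmarks vhash)
      #:use-module (benchmark-suite lib)
      #:use-module (ice-9 vlist))

    (define table (alist->vhash '((x . 1) (y . 2) (z . 3))))

    (with-benchmark-prefix "vhash"
      ;; like `pass-if', but with a suggested iteration count
      (benchmark "vhash-assoc, small table" 100000
        (vhash-assoc 'y table)))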

The benchmarking framework tries to be appropriate for microbenchmarks,
as the `benchmark' form includes a suggested number of iterations.
Ideally when you create a benchmark, you give it a number of iterations
that makes it run approximately as long as the other benchmarks.

When the benchmarking suite was first written, ten years ago, an empty
"reference" benchmark was calibrated to run for approximately 1
second.  Currently it runs in 0.012 seconds.  This is one problem: the
overall suite has stale iteration counts.  There is a facility for
scaling the iteration counts of the suite as a whole, but it is unused.
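
Conceptually, such a scaling knob just multiplies every benchmark's
suggested iteration count by a global factor.  The names below are
assumptions for illustration, not necessarily what
benchmark-suite/lib.scm calls them:

    ;; hypothetical global scaling of suggested iteration counts
    (define iteration-factor 1)   ; e.g. set to 100 to rebase stale counts

    (define (effective-iterations suggested)
      ;; the number of times a benchmark body would actually be run
      (inexact->exact (round (* suggested iteration-factor))))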

Another problem is that the actual runtimes of the various benchmarks
vary quite a lot, from 3.3 seconds for the srfi-1 assoc benchmark down
to 0.012 seconds for if.bm.

Short runtimes magnify measurement imprecision.  The measurement
function used to be `times', but I just changed it to the
higher-precision get-internal-real-time / get-internal-run-time.
Still, there is not much you can do for a benchmark that runs in a few
milliseconds or less.
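
For reference, timing a thunk directly with those counters looks
something like this (a sketch, not the suite's actual implementation):

    ;; wall-clock and CPU time for a single call of THUNK, in seconds
    (define (time-thunk thunk)
      (let ((real0 (get-internal-real-time))
            (run0  (get-internal-run-time)))
        (thunk)
        (values (exact->inexact
                 (/ (- (get-internal-real-time) real0)
                    internal-time-units-per-second))
                (exact->inexact
                 (/ (- (get-internal-run-time) run0)
                    internal-time-units-per-second)))))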

Another big problem is that some effect-free microbenchmarks are
optimized away entirely.  For example, the computations in
arithmetic.bm fold to constants at compile time, and the same goes for
if.bm.  These benchmarks do not measure anything useful.
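
To illustrate, something like the first form below folds to a constant
at compile time, so the loop measures only framework overhead; reading
the operands from top-level variables is one way to keep the work live
(a sketch, not necessarily the fix we would adopt):

    ;; constant operands: the addition folds away, nothing is measured
    (benchmark "add, folds away" 1000000
      (+ 1 2))

    ;; top-level variable references are opaque to the optimizer,
    ;; so the addition actually runs on every iteration
    (define x 1)
    (define y 2)
    (benchmark "add, stays live" 1000000
      (+ x y))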

The benchmarking suite attempts to compensate for the overhead of the
test harness by reporting a "core time": the time taken to run a
benchmark, minus the time taken to run an empty benchmark with the same
number of iterations.  The benchmark body itself is compiled as a
thunk, and the framework calls that thunk repeatedly.  In theory this
sounds good.  In practice, however, for high-iteration microbenchmarks
the overhead of the thunk call outweighs the work done by the benchmark
body itself.
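
In pseudo-code, the core-time calculation amounts to something like
this (a conceptual sketch, not the suite's actual code):

    ;; total CPU time to call THUNK, ITERATIONS times
    (define (run-thunk thunk iterations)
      (let ((start (get-internal-run-time)))
        (let loop ((i 0))
          (when (< i iterations)
            (thunk)
            (loop (+ i 1))))
        (- (get-internal-run-time) start)))

    ;; "core time": subtract the cost of driving an empty thunk
    (define (core-time thunk iterations)
      (- (run-thunk thunk iterations)
         (run-thunk (lambda () #f) iterations)))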

For what it's worth, the current per-iteration overhead of the
framework appears to be about 35 microseconds on my laptop.  If we
inline the iteration loop into the benchmark itself, rather than
calling a thunk repeatedly, we can bring that down to around 13
microseconds.  However, it's probably best to leave it as it is,
because if we inline the loop, the body is liable to be optimized out.
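
For comparison, an inline driver might look like the macro below; it
saves the per-iteration call, but an effect-free body then tends to
disappear along with the loop.  (The name `repeat' is made up for
illustration.)

    ;; hypothetical inline driver: the body is expanded directly into
    ;; the loop, so there is no procedure call per iteration
    (define-syntax-rule (repeat n body ...)
      (let loop ((i 0))
        (when (< i n)
          body ...
          (loop (+ i 1)))))

    ;; e.g. (repeat 1000000 (vhash-assoc 'y table))
    ;; but a constant-foldable body lets the whole loop be deleted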

So, those are the problems: benchmarks running for wildly inconsistent
durations; benchmarks that do not measure anything useful; and
benchmarks being optimized away entirely.

My proposal is to rebase the iteration count in 0-reference.bm so that
it runs for 0.5 seconds on some modern machine, adjust all other
benchmarks to match, and remove those benchmarks that do not measure
anything useful.  Finally, we should perhaps enable automatic scaling
of the iteration count.  What do folks think about that?

On the positive side, all of our benchmark results are clearly reported
as a time for a given number of iterations, so this change should not
affect users who measure time per iteration.

Regards,

Andy
-- 
http://wingolog.org/


