unofficial mirror of guile-user@gnu.org 
* "Pace is nothing without guile"
@ 2008-07-13 17:06 Neil Jerram
  2008-07-13 18:24 ` Greg Troxel
  2008-07-15 19:21 ` Ludovic Courtès
From: Neil Jerram @ 2008-07-13 17:06 UTC
  To: guile-devel, guile-user

... That's a comment from coverage of the current England v South
Africa cricket match
(http://uk.cricinfo.com/talk/content/current/multimedia/360921.html).

But is Guile nothing without pace?

Well obviously it isn't "nothing", but I think Guile is perceived,
among both Scheme implementations and free scripting languages, as
being a bit slow, and a large part of the reason for this is that we
have no systematic benchmarking.

So this email is about systematic performance data.  I was wondering
what benchmarks we could run to get good coverage of all of Guile's
functionality, and suddenly thought "of course, the test suite!"  The
test suite should, by definition, provide coverage of everything that
we care about.  Therefore I think that we should be able to start
collecting a lot of useful performance data by implementing a version
of "make check" that measures and records the time that each test
takes to run.
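One way to implement that measuring step - sketched here in Python
rather than in the suite's own Scheme machinery, and with a trivial
stand-in command, since the actual test-driver invocation is only an
assumption - is to wrap each test run in a wall-clock timer:

```python
import subprocess
import sys
import time

def time_command(argv):
    """Run a command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(argv, check=False, capture_output=True)
    return time.perf_counter() - start

# Stand-in for timing one real test run,
# e.g. time_command(["./check-guile", "numbers.test"]).
duration = time_command([sys.executable, "-c", "pass"])
print(f"{duration:.3f}s")
```

Wall-clock time is noisy, of course; averaging over several runs would
presumably be part of any real harness.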

What I'd like input and advice on is exactly how we store and collate
such data.  I think the system should ideally support:

- arbitrary later analysis of the collected data

- correlation of the result for a specific test with the exact source
code of that test at the time it was run...

- ...and hence, being able to work out (later) that the results
changed because the content of the test changed

- anyone running the tests and uploading data, not just Guile core developers

- associating a set of results with the relevant information about the
machine that they were obtained on (CPUs, RAM) in such a way that the
information is trustworthy, but without invading the privacy of the
uploader.
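On that last point, one possibility - purely a sketch, since the exact
attribute set is what's in question - is to report only coarse machine
attributes and replace anything personal, such as the hostname, with a
short hash, so that results from the same machine stay correlatable
without being identifying:

```python
import hashlib
import os
import platform

def machine_info():
    """Return a coarse machine description: CPU architecture, core
    count, and a short hash of the hostname (correlatable across
    uploads from the same machine, but not identifying)."""
    host_tag = hashlib.sha1(platform.node().encode()).hexdigest()[:8]
    return f"{platform.machine()}/{os.cpu_count()}cpu/{host_tag}"

print(machine_info())
```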

So how do we do that?  Perhaps each test's content could be identified
by its Git (SHA-1) hash - together with the path of the repository
containing that version.  And I imagine that the results could take
the form of a file containing lines like:

("numbers.test" SHA1-HASH REPO-PATH DATE+TIME MACHINE-INFO MEASURED-DURATION)

That would allow sets of results to be concatenated for later
analysis.  But I'm not sure what the relevant MACHINE-INFO is and how
to represent that.
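Whatever MACHINE-INFO ends up being, collating concatenated result
files of that shape should be straightforward.  A sketch in Python,
with made-up field values, just to show grouping by test name and
content hash:

```python
import shlex
from collections import defaultdict

def parse_result_line(line):
    """Parse one ("NAME" SHA1 REPO DATE MACHINE DURATION) result line."""
    fields = shlex.split(line.strip().lstrip("(").rstrip(")"))
    name, sha1, repo, when, machine, duration = fields
    return name, sha1, repo, when, machine, float(duration)

lines = [
    '("numbers.test" 3f2a1b /srv/guile.git 2008-07-13T17:06 x86/2cpu 1.42)',
    '("numbers.test" 3f2a1b /srv/guile.git 2008-07-14T09:00 x86/2cpu 1.38)',
]

# Group durations by (test, content hash): the same hash means the
# test's source was identical, so the timings are directly comparable.
by_test = defaultdict(list)
for line in lines:
    name, sha1, *_rest, duration = parse_result_line(line)
    by_test[(name, sha1)].append(duration)

for (name, sha1), durations in sorted(by_test.items()):
    print(name, sha1, f"mean {sum(durations) / len(durations):.2f}s")
```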

Any thoughts / comments / ideas?  Thanks for reading!

      Neil





* Re: "Pace is nothing without guile"
From: Greg Troxel @ 2008-07-13 18:24 UTC
  To: Neil Jerram; +Cc: guile-user, guile-devel

My immediate reaction is that test suites aren't good benchmarks: we
will often want to add to a test suite, while changing a benchmark
invalidates previous data, so we will not want to change the benchmark.

Now, if you mean to use the test suite as a collection of
micro-benchmarks, so that we just have a rule that individual tests
aren't modified without serious cause, but new ones can be added, then
that makes sense.







* Re: "Pace is nothing without guile"
From: Neil Jerram @ 2008-07-13 22:31 UTC
  To: Greg Troxel; +Cc: guile-user, guile-devel

2008/7/13 Greg Troxel <gdt@ir.bbn.com>:
> My immediate reaction is that test suites aren't good benchmarks: we
> will often want to add to a test suite, while changing a benchmark
> invalidates previous data, so we will not want to change the benchmark.

Yes, but...

> Now, if you mean to use the test suite as a collection of
> micro-benchmarks, so that we just have a rule that individual tests
> aren't modified without serious cause, but new ones can be added, then
> that makes sense.

Yes, this is what I meant.  And I also think that, over the long term,
enough test cases will remain unchanged that we won't have to worry in
practice about the few that do change.

I've now taken a better look at the existing benchmark-suite - which
doesn't contain very many benchmarks, but does provide a sensible
discussion of timings, and some useful library functions.  I don't
think it will be very hard to incorporate the test-suite into the set
of benchmarks.

Regards,
          Neil





* Re: "Pace is nothing without guile"
From: Ludovic Courtès @ 2008-07-15 19:21 UTC
  To: guile-devel; +Cc: guile-user

Hi,

"Neil Jerram" <neiljerram@googlemail.com> writes:

> So this email is about systematic performance data.  I was wondering
> what benchmarks we could run to get good coverage of all of Guile's
> functionality, and suddenly thought "of course, the test suite!"

Like Greg, I'm a bit suspicious about using the test suite as a
collection of micro-benchmarks.  Usually, micro-benchmarks aim to assess
the cost of a specific operation, which must consequently be isolated to
avoid interference with unrelated computations.

Conversely, unit tests aim to verify that certain invariants hold,
regardless of "peripheral" computations required for that verification.
For instance, a few SRFI-14 tests use `every', a few SRFI-69 tests use
`lset=', various tests use `memq', etc.  (OTOH, looking at the test
suite, I'm not sure whether these tests are exceptions.)

Other than that, my feeling is that it may be harder to analyze timings
of tests that were not written as micro-benchmarks in the first place,
since one must first determine what the test actually measures.

> - anyone running the tests and uploading data, not just Guile core developers

Although quite in fashion these days (see CDash), I'm not sure we
absolutely need such a tool.  Having *some* benchmark suite to run seems
more important as a first step.  :-)

Thanks,
Ludo'.





