From: Dirk Herrmann
To: guile-devel@gnu.org, guile-user@gnu.org
Subject: benchmarking framework
Date: Sat, 20 Jul 2002 04:13:30 +0200 (CEST)

Hi folks,

I have taken the liberty of adding a benchmarking framework to Guile.
It is a simple adaptation of the testing framework, and a lot of code
has been copied from there.  Well, maybe at some point it would be
better to extract the common parts into a module.  Whatever, here's a
short introduction:

Benchmarks are placed in the directory benchmark-suite/benchmarks.
They are ordinary Scheme files, but with the .bm extension.  Files
with that extension are detected automatically; other benchmark files
can also be specified explicitly.

Within the benchmark files, you introduce benchmarks with the
(benchmark ...) macro.  The syntax is as follows:

  (benchmark NAME ITERATION body...)

where NAME is a name for the benchmark, similar to the names in the
test-suite (btw, you also have with-benchmark-prefix...), ITERATION
is the number of times the body shall be executed, and the body,
finally, is the code to be benchmarked.  Example:

  (define bignum (1- (expt 2 128)))
  (let* ((i 0))
    (benchmark "bignum" 130000
      (logand i bignum)
      (set! i (+ i 1))))

This will run the statements (logand i bignum) and (set! i (+ i 1))
130000 times (this, however, can be scaled - see below).  The time
this takes is measured and written out.  The execution counts in the
example benchmarks are chosen such that on my machine each benchmark
runs for about one second.
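To make the file layout concrete, here is a sketch of what a small
benchmark file might look like.  This is only an illustration: the
file name and the "logand" prefix are made up, and I am assuming that
with-benchmark-prefix groups benchmark names in the same way
with-test-prefix does in the test-suite:

  ;; benchmark-suite/benchmarks/logand.bm -- hypothetical example file
  (define bignum (1- (expt 2 128)))
  (define fixnum 12345)

  (with-benchmark-prefix "logand"

    (let ((i 0))
      (benchmark "bignum" 130000   ; iteration count, can be scaled
        (logand i bignum)
        (set! i (+ i 1))))

    (let ((i 0))
      (benchmark "fixnum" 130000
        (logand i fixnum)
        (set! i (+ i 1)))))

Dropped into benchmark-suite/benchmarks with the .bm extension, such a
file should be picked up automatically when the benchmarks are run
(see below).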
You start the benchmarking with the command ./benchmark-guile.  It
takes the same options as ./check-guile, except that
--flag-unresolved (which is test specific) is missing, and
--test-suite is renamed to --benchmark-suite.  There is one
additional option, --iteration-factor NUM, which allows you to scale
the execution time of the benchmarks: as can be seen from the example
above, every benchmark is given an iteration count, which indicates
how often its body is to be executed.  With the option
--iteration-factor NUM you can increase or decrease these execution
counts and thus influence the time needed for performing the
benchmarks.  For example, running the benchmark suite with
--iteration-factor 0.5 will roughly halve the execution time, since
every benchmark's body is executed about half as often (the bignum
example above would then run about 65000 instead of 130000 times).

Results are written to a log file (this file contains a lot of data)
and to the console (still a lot of data, but not quite as much).  The
values have the following meaning:

total:        the total execution time (this is what the unix time
              command reports as real time).
user:         the user time (also reported as user time by the unix
              time command).
system:       the system time, again as with the unix time command.
frame:        the part of the user time that is consumed by the
              benchmarking framework itself.  You can think of this
              as the time that would still be consumed even if the
              benchmarked code itself were empty.  This value does
              not include the time for garbage collection.
benchmark:    the part of the user time that is actually spent within
              the benchmarked code, i.e. with the time needed by the
              benchmarking framework subtracted.  This value does
              include all garbage collection time.
user/interp:  the part of the user time that is spent in the
              interpreter (and not in garbage collection).
bench/interp: the part of the benchmark time that is spent in the
              interpreter (and not in garbage collection).  This is
              most probably the value you are interested in, unless
              you are doing garbage collection checks.
gc:           the time spent in garbage collection.

There are, however, some caveats when using these values: the frame
time is estimated by running an empty benchmark during startup and
measuring its time, which can be somewhat inaccurate.  This value is
then used to compute the benchmark and bench/interp times.  I don't
know about the accuracy of the other values as reported by (times)
and (gc-run-time).

Best regards,
Dirk Herrmann

_______________________________________________
Guile-user mailing list
Guile-user@gnu.org
http://mail.gnu.org/mailman/listinfo/guile-user