From: Andrea Corallo <acorallo@gnu.org>
To: Pip Cet <pipcet@protonmail.com>
Cc: Eli Zaretskii <eliz@gnu.org>,
stefankangas@gmail.com, mattiase@acm.org, eggert@cs.ucla.edu,
emacs-devel@gnu.org
Subject: Re: New "make benchmark" target
Date: Tue, 31 Dec 2024 04:55:26 -0500
Message-ID: <yp18qrw6usx.fsf@fencepost.gnu.org>
In-Reply-To: <871pxorh30.fsf@protonmail.com> (Pip Cet's message of "Mon, 30 Dec 2024 21:34:55 +0000")
Pip Cet <pipcet@protonmail.com> writes:
> "Andrea Corallo" <acorallo@gnu.org> writes:
>>> Benchmarking is hard, and I wouldn't have provided this very verbose
>>> example if I hadn't seen "paradoxical" results that can only be
>>> explained by such mechanisms. We need to move away from average run
>>> times either way, and that requires code changes.
>>
>> I'm not sure I understand what you mean; if we prefer something like
>> geo-mean in elisp-benchmarks we can change to that, it should be easy.
>
> In such situations (machines that don't allow reasonable benchmarks;
> this has become the standard situation for me) I've usually found it
> necessary to store a bucket histogram (or full history) across many
> benchmark runs; this clearly allows you to see the different throttling
> levels as separate peaks. If we must use a single number, we want the
> fastest actual run.
This is not how, in my professional experience at least, benchmarks are
made or used.  If the CPU is throttling during the execution of a test,
this has to be measured and reported in the final score, as it reflects
how the system behaves.  Considering only "best scores" is artificial; I
see no reason for further complications in this area.
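For concreteness, the per-run distribution we are both talking about can
be collected with the built-in `benchmark-run' macro.  This is only a
sketch (the function name and the 10ms bucket width are arbitrary
choices of mine, not anything in elisp-benchmarks):

```elisp
;; Sketch: run FN many times, bucket the elapsed times into a
;; histogram.  Throttling levels would show up as separate peaks.
(defun my/benchmark-histogram (fn runs)
  "Run FN RUNS times; return a sorted alist of (BUCKET . COUNT)."
  (let ((hist (make-hash-table :test #'eql)))
    (dotimes (_ runs)
      ;; `benchmark-run' returns (ELAPSED GC-RUNS GC-ELAPSED).
      (let* ((elapsed (car (benchmark-run (funcall fn))))
             (bucket (floor elapsed 0.01))) ; 10ms buckets, arbitrary
        (puthash bucket (1+ (gethash bucket hist 0)) hist)))
    (let (alist)
      (maphash (lambda (k v) (push (cons k v) alist)) hist)
      (sort alist #'car-less-than-car))))
```

Whether we then report the mean, geo-mean, or the full histogram is
exactly the policy question under discussion.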
>> I'm open to patches to elisp-benchmarks (and to its hypothetical copy in
>> emacs-core). My opinion that something can potentially be improved in
>
> What's the best way to report the need for such improvements? I'm
> currently aware of four "bugs" we should definitely fix; one of them,
> ideally, before merging.
It's an ELPA package, so AFAIK the process is the same as for
emacs-core.
>> it (why not), but I personally ATM don't understand the need for ERT.
>
> Let's focus on the basics right now: people know how to write ERT tests.
> We have hundreds of them. Some of them could be benchmarks, and we want
> to make that as easy as possible.
Which ones?
> ERT provides a way to do that, in the same file if we want to: just add
> a tag.
>
> It provides a way to locate and properly identify resources (five
> "bugs": reusing test A as input for test B means we don't have
> separation of tests in elisp-benchmarks, and that's something we should
> strive for).
That (if it's the case) sounds like a very simple fix.
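For the record, tagging is indeed cheap in ERT.  A sketch of what Pip
describes (the `:benchmark' tag name, and `my-fib' itself, are
hypothetical conventions of mine, not an agreed interface):

```elisp
;; Sketch: a benchmark written as an ordinary ERT test, selected by a
;; tag.  The :benchmark tag is a hypothetical naming convention.
(require 'ert)

(defun my-fib (n)
  "Naive Fibonacci, used here only as a toy workload."
  (if (< n 2) n (+ (my-fib (- n 1)) (my-fib (- n 2)))))

(ert-deftest my-bench-fib ()
  :tags '(:benchmark)
  (should (= 55 (my-fib 10))))

;; Run only the tagged benchmarks, e.g. in batch mode:
;; (ert-run-tests-batch '(tag :benchmark))
```

The `(tag SYMBOL)' selector is standard ERT, so ordinary tests and
benchmarks could coexist in the same file.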
> It also allows a third class of tests: stress tests which we want to
> execute more often than once per test run, which identify occasional
> failures in code that needs to be executed very often to establish
> stability (think bug#75105: (cl-random 1.0e+INF) produces an incorrect
> result once every 8 million runs). IIRC, right now ERT uses ad-hoc
> loops for such tests, but it'd be nicer to expose the repetition count
> in the framework (I'm not going to run the non-expensive testsuite on
> FreeDOS if that means waiting for a million iterations on an emulated
> machine).
>
> (I also think we should introduce an ert-how structure that describes how
> a test is to be run: do we want to inhibit GC or allow it?
We definitely don't want to inhibit GC while running benchmarks. Why
should we?
> Run some
> warm-up test runs or not?
Of course we should; measuring a fresh state is not realistic.
elisp-benchmarks runs an iteration of all tests as warm-up, which I
think is good enough.
> What's the expected time, and when should we
> time out?
Benchmark tests are not testsuite tests: they are not supposed to hang
or have long execution times.  But anyway, we can easily introduce a
time-out which all benchmarks have to stay within, if we want to be on
the safe side.
> We can't run the complete matrix for all tests, so we need
> some hints in the test, and the lack of a test declaration in
> elisp-benchmarks hurts us there).
As Eli mentioned, I don't think the goal is to be able to select/run
complex matrices of tests here.  I believe there are two typical use
cases:
1- A user runs the whole suite to get the final score (the typical use).
2- A developer runs a single benchmark (probably to profile or
micro-optimize it).
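To illustrate the two cases with the current elisp-benchmarks package
(if I recall its interface correctly, the optional selector argument is
a regexp matched against benchmark names; "fibn" here is just an
example name):

```elisp
;; Case 1: run the whole suite and get the final summary/score.
(elisp-benchmarks-run)

;; Case 2: run only the benchmarks matching a regexp, e.g. to profile
;; or micro-optimize a single one.
(elisp-benchmarks-run "fibn")
```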