From: Andrea Corallo
Newsgroups: gmane.emacs.devel
Subject: Re: New "make benchmark" target
Date: Tue, 31 Dec 2024 04:55:26 -0500
To: Pip Cet
Cc: Eli Zaretskii, stefankangas@gmail.com, mattiase@acm.org,
 eggert@cs.ucla.edu, emacs-devel@gnu.org
In-Reply-To: <871pxorh30.fsf@protonmail.com> (Pip Cet's message of
 "Mon, 30 Dec 2024 21:34:55 +0000")

Pip Cet writes:

> "Andrea Corallo" writes:
>>> Benchmarking is hard, and I wouldn't have provided this very verbose
>>> example if I hadn't seen "paradoxical" results that can only be
>>> explained by such mechanisms.  We need to move away from average run
>>> times either way, and that requires code changes.
>>
>> I'm not sure I understand what you mean; if we prefer something like
>> geo-mean in elisp-benchmarks we can change to that, it should be easy.
>
> In such situations (machines that don't allow reasonable benchmarks;
> this has become the standard situation for me) I've usually found it
> necessary to store a bucket histogram (or full history) across many
> benchmark runs; this clearly allows you to see the different throttling
> levels as separate peaks.  If we must use a single number, we want the
> fastest actual run.

This is not how, in my professional experience at least, benchmarks are
made and used.  If the CPU is throttling during the execution of a test,
this has to be measured and reported in the final score, as it reflects
how the system behaves.  Considering only "best scores" is artificial;
I see no reason for further complications in this area.

>> I'm open to patches to elisp-benchmarks (and to its hypothetical copy
>> in emacs-core).  My opinion that something can potentially be improved
>> in
>
> What's the best way to report the need for such improvements?  I'm
> currently aware of four "bugs" we should definitely fix; one of them,
> ideally, before merging.

It's an ELPA package, so AFAIK the process is the same as for
emacs-core.

>> it (why not), but I personally ATM don't understand the need for ERT.
>
> Let's focus on the basics right now: people know how to write ERT
> tests.  We have hundreds of them.  Some of them could be benchmarks,
> and we want to make that as easy as possible.

Which ones?

> ERT provides a way to do that, in the same file if we want to: just
> add a tag.
>
> It provides a way to locate and properly identify resources (five
> "bugs": reusing test A as input for test B means we don't have
> separation of tests in elisp-benchmarks, and that's something we
> should strive for).

That (if it is indeed the case) sounds like a very simple fix.

> It also allows a third class of tests: stress tests which we want to
> execute more often than once per test run, which identify occasional
> failures in code that needs to be executed very often to establish
> stability (think bug#75105: (cl-random 1.0e+INF) produces an incorrect
> result once every 8 million runs).  IIRC, right now ERT uses ad-hoc
> loops for such tests, but it'd be nicer to expose the repetition count
> in the framework (I'm not going to run the non-expensive testsuite on
> FreeDOS if that means waiting for a million iterations on an emulated
> machine).
>
> (I also think we should introduce an ert-how structure that describes
> how a test is to be run: do we want to inhibit GC or allow it?

We definitely don't want to inhibit GC while running benchmarks.  Why
should we?

> Run some warm-up test runs or not?

Of course we should: measuring a fresh state is not realistic.
elisp-benchmarks already runs one iteration of all tests as a warm-up,
and I think this is good enough (see the warm-up sketch at the end of
this message).

> What's the expected time, and when should we time out?

Benchmark tests are not testsuite tests: they are not supposed to hang
or to have long execution times.  That said, we can easily introduce a
time-out that all benchmarks have to stay within, if we want to be on
the safe side (see the time-out sketch at the end of this message).

> We can't run the complete matrix for all tests, so we need some hints
> in the test, and the lack of a test declaration in elisp-benchmarks
> hurts us there).

As Eli mentioned, I don't think the goal here is to be able to select
and run complex matrices of tests.  I believe there are two typical use
cases:

1- A user runs the whole suite to get the final score (the typical use).
2- A developer runs a single benchmark (probably to profile or
   micro-optimize it).
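
For concreteness, a rough sketch of what the tag-based setup could look
like; the tag name `benchmark', the toy test body, and the lack of any
timing harness are assumptions on my part, not an agreed convention:

  ;; Minimal sketch: an ordinary ERT test marked as a benchmark
  ;; candidate with a tag.  The tag name `benchmark' is only an example.
  (require 'cl-lib)
  (require 'ert)

  (ert-deftest bench-fibonacci ()
    "Toy benchmark candidate: a small recursive computation."
    :tags '(benchmark)
    (cl-labels ((fib (n) (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))))
      (should (= (fib 25) 75025))))

  ;; ERT's selector language can then pick out only the tagged tests,
  ;; e.g. from Lisp with (ert '(tag benchmark)), or in batch mode with:
  ;;   (ert-run-tests-batch-and-exit '(tag benchmark))

This would also cover the developer-runs-one-benchmark case, since ERT
selectors can name a single test as easily as a tag.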
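
The warm-up-then-measure scheme plus a geometric-mean score is likewise
only a few lines; the function names below are made up for illustration
and are not the elisp-benchmarks API:

  ;; Sketch: one discarded warm-up run, then N timed runs, summarized
  ;; with a geometric mean.  Names are illustrative only.
  (require 'benchmark)

  (defun bench--collect-times (fn runs)
    "Call FN once as warm-up, then RUNS timed times; return elapsed seconds."
    (funcall fn)                        ; warm-up, result discarded
    (let (times)
      (dotimes (_ runs)
        ;; `benchmark-run' returns (ELAPSED GC-COUNT GC-ELAPSED).
        (push (car (benchmark-run (funcall fn))) times))
      (nreverse times)))

  (defun bench--geometric-mean (times)
    "Geometric mean of TIMES (all assumed strictly positive)."
    (exp (/ (apply #'+ (mapcar #'log times)) (float (length times)))))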
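
As for the stress-test class, the ad-hoc loop has roughly this shape;
the invariant checked below is a deliberately trivial placeholder, not
the actual bug#75105 condition, and the iteration count is arbitrary:

  ;; Sketch of a repetition loop for catching rare, intermittent
  ;; failures.  Exposing the count to the framework would let slow
  ;; (e.g. emulated) machines scale it down.
  (require 'ert)

  (ert-deftest stress-random-range ()
    "Repeat a cheap invariant check many times."
    (dotimes (_ 1000000)
      (let ((x (random 1000)))
        ;; Placeholder invariant: `random' stays within [0, limit).
        (should (and (<= 0 x) (< x 1000))))))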
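
And the time-out need not be anything fancy: a post-hoc budget check is
enough for benchmarks that are not supposed to hang in the first place
(the helper name and the budget value below are made up).  A hard,
preemptive timeout would have to come from the driving process, since
Emacs timers cannot interrupt a busy Lisp loop.

  ;; Sketch: fail a benchmark that blows past its time budget.  This is
  ;; a soft check performed after the run; it does not abort a run that
  ;; is already in progress.
  (defun bench--run-with-budget (fn budget)
    "Run FN once; signal an error if it took more than BUDGET seconds."
    (let ((start (float-time)))
      (funcall fn)
      (let ((elapsed (- (float-time) start)))
        (when (> elapsed budget)
          (error "Benchmark exceeded its %gs budget (took %.2fs)"
                 budget elapsed))
        elapsed)))

  ;; Example use, with `bench--collect-times' from the earlier sketch:
  ;;   (bench--run-with-budget
  ;;    (lambda () (bench--collect-times #'some-benchmark 10)) 30)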