From mboxrd@z Thu Jan  1 00:00:00 1970
From: Eli Zaretskii
Newsgroups: gmane.emacs.devel
Subject: Re: New "make benchmark" target
Date: Mon, 06 Jan 2025 16:46:15 +0200
Message-ID: <86ikqs57bc.fsf@gnu.org>
References: <87h679kftn.fsf@protonmail.com> <87frm5z06l.fsf@protonmail.com>
 <86msgdnqmv.fsf@gnu.org> <87wmfhxjce.fsf@protonmail.com>
 <86jzbhnmzg.fsf@gnu.org> <87o70txew4.fsf@protonmail.com>
 <871pxorh30.fsf@protonmail.com> <86wmfgm3a5.fsf@gnu.org>
 <87pll2fsj7.fsf@protonmail.com>
To: Andrea Corallo
Cc: pipcet@protonmail.com, stefankangas@gmail.com, mattiase@acm.org,
 eggert@cs.ucla.edu, emacs-devel@gnu.org
In-Reply-To: (message from Andrea Corallo on Mon, 06 Jan 2025 06:23:22 -0500)

> From: Andrea Corallo
> Cc: Eli Zaretskii, stefankangas@gmail.com, mattiase@acm.org,
>  eggert@cs.ucla.edu, emacs-devel@gnu.org
> Date: Mon, 06 Jan 2025 06:23:22 -0500
>
> Pip Cet writes:
>
> > In particular, as you (Andrea) correctly pointed out, it is sometimes
> > appropriate to use an average run time (or, non-equivalently, an
> > average speed) for reporting test results; the assumptions needed for
> > this are very significant and need to be spelled out explicitly.  The
> > vast majority of "make benchmark" uses which I think should happen
> > cannot meet these stringent requirements.
> >
> > To put things simply, it is better to discard outliers (test runs
> > which take significantly longer than the rest).  Averaging doesn't do
> > that: it simply ruins your entire test run if there is a significant
> > outlier.  IOW, running the benchmarks with a large repetition count
> > is very likely to result in useful data being discarded, and a
> > useless result.
>
> As mentioned, I disagree with having some logic put in place to
> arbitrarily decide which value is worth considering and which value
> should be discarded.  If a system is producing noisy measurements,
> that has to be reported as the error of the measurement.  Those
> numbers are there for some real reason and have to be accounted for.

Without too deep an understanding of the underlying issue: IME, if a
sample can include outliers, it is always better to use robust
estimators than to attempt to detect and discard the outliers.  That
is because outlier detection can decide that a valid measurement is an
outlier, and then the estimate becomes biased.

In practical terms, for estimating the mean, I suggest using the
sample median instead of the sample average.  The median is very
robust to outliers, and only slightly less efficient (i.e., it
converges a bit slower) than the sample average.
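Both points can be illustrated with a small sketch (plain Python, not
Emacs code; the timing values and noise parameters are invented for
illustration): a single outlier drags the sample mean far off while
the median barely moves, and on clean Gaussian data the median's
scatter is only modestly larger than the mean's.

```python
import random
import statistics

# Hypothetical benchmark timings (seconds): seven clean runs near
# 1.0 s plus one outlier, e.g. a run hit by a GC pause.
timings = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 5.40]

mean = statistics.mean(timings)      # dragged up by the one outlier
median = statistics.median(timings)  # essentially unaffected

print(f"mean = {mean:.3f} s, median = {median:.3f} s")

# Efficiency on *clean* data: simulate many outlier-free runs and
# compare how much the two estimators scatter around the true value.
random.seed(0)
means, medians = [], []
for _ in range(2000):
    sample = [random.gauss(1.0, 0.05) for _ in range(25)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

ratio = statistics.stdev(medians) / statistics.stdev(means)
# For Gaussian noise the median's standard error is roughly
# sqrt(pi/2) ~ 1.25 times the mean's -- "only slightly less efficient".
print(f"stdev(median) / stdev(mean) = {ratio:.2f}")
```

Here the mean lands around 1.55 s while the median stays near 1.005 s,
even though only one of the eight runs was bad; and on outlier-free
samples the median pays only a ~25% penalty in spread.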