From: Ihor Radchenko
Newsgroups: gmane.emacs.devel
Subject: [PATCH] Re: Bignum performance (was: Shrinking the C core)
Date: Fri, 11 Aug 2023 14:07:57 +0000
Message-ID: <87bkfdsmde.fsf@localhost>
References: <20230809094655.793FC18A4654@snark.thyrsus.com>
	<87il9owg0f.fsf@yahoo.com> <83fs4rjq9j.fsf@gnu.org>
	<87jzu2tvfc.fsf@dataswamp.org> <87y1ih3mc1.fsf@localhost>
	<87h6p5kcek.fsf@dataswamp.org> <87msyxoj2t.fsf@localhost>
	<875y5lkb4b.fsf@dataswamp.org>
To: Emanuel Berg
Cc: emacs-devel@gnu.org
In-Reply-To: <875y5lkb4b.fsf@dataswamp.org>

Emanuel Berg writes:

>> Maybe we could somehow re-use the already allocated bignum
>> objects, similar to what is done for cons cells (see
>> src/alloc.c:Fcons).
>
> Sounds reasonable :)

And... it has already been done, actually.  allocate_vectorlike calls
allocate_vector_from_block, which re-uses pre-allocated objects.
And looking into the call graph, this exact branch calling
allocate_vector_from_block is indeed called for the bignums:

    33.05%     0.00%  emacs  [unknown]  [.] 0000000000000000
            |
            ---0
               |
               |--28.04%--allocate_vectorlike
               |          |
               |           --27.78%--allocate_vector_from_block (inlined)
               |                     |
               |                     |--2.13%--next_vector (inlined)
               |                     |
               |                      --0.74%--setup_on_free_list (inlined)

If I manually cut off `allocate_vector_from_block', the benchmark time
doubles.  So, there is already some improvement coming from re-using
allocated memory.

I looked deeper into the code and tried to cut down on unnecessary
looping over the pre-allocated `vector_free_lists'.  See the attached
patch.

Without the patch:

    perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
    2.321 s

    28.60%  emacs  emacs             [.] allocate_vectorlike
    24.36%  emacs  emacs             [.] process_mark_stack
     3.76%  emacs  libgmp.so.10.5.0  [.] __gmpz_sizeinbase
     3.59%  emacs  emacs             [.] pdumper_marked_p_impl
     3.53%  emacs  emacs             [.] mark_char_table

With the patch:

    perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
    1.968 s

    33.17%  emacs  emacs             [.] process_mark_stack
     5.51%  emacs  libgmp.so.10.5.0  [.] __gmpz_sizeinbase
     5.05%  emacs  emacs             [.] mark_char_table
     4.88%  emacs  emacs             [.] pdumper_marked_p_impl
     3.30%  emacs  emacs             [.] pdumper_set_marked_impl
    ...
     2.52%  emacs  emacs             [.] allocate_vectorlike

allocate_vectorlike clearly takes a lot less time by not trying to loop
over all the ~500 empty elements of vector_free_lists.

We can further get rid of the GC by temporarily disabling it (just for
demonstration):

    (let ((beg (float-time)))
      (setq gc-cons-threshold most-positive-fixnum)
      (fib 10000 1000)
      (message "%.3f s" (- (float-time) beg)))

    perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
    0.739 s

    17.11%  emacs  libgmp.so.10.5.0  [.] __gmpz_sizeinbase
     7.35%  emacs  libgmp.so.10.5.0  [.] __gmpz_add
     6.51%  emacs  emacs             [.] arith_driver
     6.03%  emacs  libc.so.6         [.] malloc
     5.57%  emacs  emacs             [.] allocate_vectorlike
     5.20%  emacs  [unknown]         [k] 0xffffffffaae01857
     4.16%  emacs  libgmp.so.10.5.0  [.] __gmpn_add_n_coreisbr
     3.72%  emacs  emacs             [.] check_number_coerce_marker
     3.35%  emacs  fib.eln           [.] F666962_fib_0
     3.29%  emacs  emacs             [.] allocate_pseudovector
     2.30%  emacs  emacs             [.] Flss

Now, the actual bignum arithmetic (lisp/gmp.c) takes most of the CPU
time.  I am not sure what differs between the Elisp GMP bindings and
the analogous SBCL bindings that makes SBCL so much faster.

allocate_vector_from_block.diff:

diff --git a/src/alloc.c b/src/alloc.c
index 17ca5c725d0..62e96b4c9de 100644
--- a/src/alloc.c
+++ b/src/alloc.c
@@ -3140,6 +3140,7 @@ large_vector_vec (struct large_vector *p)
    vectors of the same NBYTES size, so NTH == VINDEX (NBYTES).  */
 
 static struct Lisp_Vector *vector_free_lists[VECTOR_MAX_FREE_LIST_INDEX];
+static int vector_free_lists_min_idx = VECTOR_MAX_FREE_LIST_INDEX;
 
 /* Singly-linked list of large vectors.  */
 
@@ -3176,6 +3177,8 @@ setup_on_free_list (struct Lisp_Vector *v, ptrdiff_t nbytes)
   set_next_vector (v, vector_free_lists[vindex]);
   ASAN_POISON_VECTOR_CONTENTS (v, nbytes - header_size);
   vector_free_lists[vindex] = v;
+  if (vindex < vector_free_lists_min_idx)
+    vector_free_lists_min_idx = vindex;
 }
 
 /* Get a new vector block.  */
 
@@ -3230,8 +3233,8 @@ allocate_vector_from_block (ptrdiff_t nbytes)
   /* Next, check free lists containing larger vectors.  Since
      we will split the result, we should have remaining space
      large enough to use for one-slot vector at least.  */
-  for (index = VINDEX (nbytes + VBLOCK_BYTES_MIN);
-       index < VECTOR_MAX_FREE_LIST_INDEX; index++)
+  for (index = max (VINDEX (nbytes + VBLOCK_BYTES_MIN), vector_free_lists_min_idx);
+       index < VECTOR_MAX_FREE_LIST_INDEX; index++, vector_free_lists_min_idx++)
     if (vector_free_lists[index])
       {
	/* This vector is larger than requested.  */
@@ -3413,6 +3416,7 @@ sweep_vectors (void)
   gcstat.total_vectors = 0;
   gcstat.total_vector_slots = gcstat.total_free_vector_slots = 0;
   memset (vector_free_lists, 0, sizeof (vector_free_lists));
+  vector_free_lists_min_idx = VECTOR_MAX_FREE_LIST_INDEX;
 
   /* Looking through vector blocks.  */

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at .
Support Org development at ,
or support my work at 