From: Eli Zaretskii
Subject: Re: Concurrency via isolated process/thread
Date: Thu, 06 Jul 2023 18:16:15 +0300
Message-ID: <83r0plgjeo.fsf@gnu.org>
In-Reply-To: <87wmzdxewc.fsf@localhost> (message from Ihor Radchenko on Thu, 06 Jul 2023 15:01:39 +0000)
To: Ihor Radchenko
Cc: luangruo@yahoo.com, emacs-devel@gnu.org

> From: Ihor Radchenko
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 15:01:39 +0000
>
> Eli Zaretskii writes:
>
> >> I may be wrong, but from my previous experience with performance
> >> benchmarks, memory allocation often takes a significant fraction of
> >> CPU time.  And memory allocation is a routine process on pretty much
> >> every iteration of CPU-intensive code.
> >
> > Do you have any evidence for that which you can share?  GC indeed
> > takes significant time, but memory allocation? never heard of that.
>
> This is from my testing of the Org parser.
> I noticed that storing a pair of buffer positions is noticeably faster
> compared to storing string copies of buffer text.
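
A minimal sketch of the kind of comparison described above (not taken
from the Org parser; just a synthetic buffer and `benchmark-run'): the
first loop stores (BEG . END) position pairs, the second allocates a
fresh string per line.

  (require 'benchmark)

  (with-temp-buffer
    ;; Synthetic test data; the real input would be an Org buffer.
    (dotimes (_ 100000) (insert "lorem ipsum dolor sit amet\n"))
    (let (pairs strings)
      (list
       ;; Store only positions: one cons of two fixnums per line.
       (benchmark-run 1
         (goto-char (point-min))
         (while (re-search-forward "^.+$" nil t)
           (push (cons (match-beginning 0) (match-end 0)) pairs)))
       ;; Store string copies: one freshly allocated string per line.
       (benchmark-run 1
         (goto-char (point-min))
         (while (re-search-forward "^.+$" nil t)
           (push (buffer-substring-no-properties
                  (match-beginning 0) (match-end 0))
                 strings))))))

Each `benchmark-run' form returns (TIME GC-RUNS GC-TIME), so the second
result also shows how much of the difference comes from GC triggered by
the extra allocations.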
> The details usually do not show up in M-x profiler reports, but I now
> tried perf out of curiosity:
>
>   14.76%  emacs  emacs  [.] re_match_2_internal
>    9.39%  emacs  emacs  [.] re_compile_pattern
>    4.45%  emacs  emacs  [.] re_search_2
>    3.98%  emacs  emacs  [.] funcall_subr
>
> AFAIU, this is memory allocation, taking a good one second in this case:
>
>    3.37%  emacs  emacs  [.] allocate_vectorlike

It is?  Which part(s) of allocate_vectorlike take these 3.37% of run
time?  It does much more than just allocate memory.

> These are just CPU cycles.  I am not sure if there are any other
> overheads related to memory allocation that translate into extra user
> time.

Well, we need to be pretty damn sure before we consider this a fact,
don't we?

> > ... But the global lock used by the Lisp threads we have is actually
> > such a lock, and the results are well known.
>
> To be fair, a global lock is an extreme worst-case scenario.

If you consider the fact that the global state in Emacs is huge, maybe
it is a good approximation of what will need to be locked anyway?

> Locking specific Lisp objects is unlikely to block normal Emacs usage,
> especially when the async code is written carefully.  Except when there
> is a need to lock something from the global and frequently used Emacs
> state, like the heap or the obarray.  Which is why I asked about memory
> allocation.

You forget buffers, windows, frames, variables, and other global
stuff.
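
To make "locking specific Lisp objects" concrete, here is a rough
sketch using the existing mutex primitives (the names are made up for
illustration).  Note that today's cooperative Lisp threads still run
under the global lock, so this only shows the shape of the idea, and it
says nothing about code that also has to touch buffers, windows,
frames, or other global state.

  ;; Illustration only: a shared object that carries its own mutex.
  (defvar my-cache-mutex (make-mutex "my-cache")
    "Lock protecting `my-cache' and nothing else.")
  (defvar my-cache (make-hash-table :test #'equal))

  (defun my-cache-put (key value)
    "Store VALUE under KEY while holding only the cache's own lock."
    (mutex-lock my-cache-mutex)
    (unwind-protect
        (puthash key value my-cache)
      (mutex-unlock my-cache-mutex)))

  ;; Threads touching only `my-cache' contend only on this mutex;
  ;; anything that also reads or modifies global state would not get
  ;; away with such a narrow lock.
  (make-thread (lambda () (my-cache-put "a" 1)) "worker-a")
  (make-thread (lambda () (my-cache-put "b" 2)) "worker-b")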