From: Ihor Radchenko <yantar92@posteo.net>
To: Eli Zaretskii
Cc: luangruo@yahoo.com, emacs-devel@gnu.org
Subject: Re: Concurrency via isolated process/thread
Date: Thu, 06 Jul 2023 15:01:39 +0000
Message-ID: <87wmzdxewc.fsf@localhost>
In-Reply-To: <831qhli14t.fsf@gnu.org>
Eli Zaretskii writes:

>> I may be wrong, but from my previous experience with performance
>> benchmarks, memory allocation often takes a significant fraction of CPU
>> time. And memory allocation is a routine process on pretty much every
>> iteration of CPU-intensive code.
>
> Do you have any evidence for that which you can share? GC indeed
> takes significant time, but memory allocation? never heard of that.

This is from my testing of the Org parser. I noticed that storing a
pair of buffer positions is noticeably faster than storing string
copies of the buffer text.

The details usually do not show up in M-x profiler reports, but I have
now tried perf out of curiosity:

  14.76%  emacs  emacs  [.] re_match_2_internal
   9.39%  emacs  emacs  [.] re_compile_pattern
   4.45%  emacs  emacs  [.] re_search_2
   3.98%  emacs  emacs  [.] funcall_subr

AFAIU, the next entry is memory allocation, taking a good one second in
this case:

   3.37%  emacs  emacs  [.] allocate_vectorlike
   3.17%  emacs  emacs  [.] Ffuncall
   3.01%  emacs  emacs  [.] exec_byte_code
   2.90%  emacs  emacs  [.] buf_charpos_to_bytepos
   2.82%  emacs  emacs  [.] find_interval
   2.74%  emacs  emacs  [.] re_iswctype
   2.57%  emacs  emacs  [.] set_default_internal
   2.48%  emacs  emacs  [.] plist_get
   2.24%  emacs  emacs  [.] Fmemq
   1.95%  emacs  emacs  [.] process_mark_stack

These are just CPU cycles. I am not sure if there are any other
overheads related to memory allocation that translate into extra user
time.

>> Would it be of interest to allow locking objects for read/write using
>> semantics similar to `with-mutex'?
>
> Locking slows down software and should be avoided, I'm sure you know
> it.

I am not sure anymore. Po Lu appears to advocate for locking instead of
the isolated process approach, referring to other software.

> ... But the global lock used by the Lisp threads we have is actually
> such a lock, and the results are well known.

To be fair, a global lock is an extreme worst-case scenario. Locking
specific Lisp objects is unlikely to block normal Emacs usage,
especially when the async code is written carefully.

The exception is when something in the global and frequently used Emacs
state needs to be locked, like the heap or the obarray. Which is why I
asked about memory allocation. The need to avoid a locked heap is the
main reason I proposed isolated processes.

... well, also things like lisp_eval_depth and other global interpreter
state variables.

But, AFAIU, Po Lu is arguing that trying isolated processes is not
worth the effort and that it is better to bite the bullet and implement
proper locking.

--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at .
Support Org development at ,
or support my work at
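
P.S. In case a concrete example of the positions vs. string copies
comparison is useful, here is a minimal, self-contained sketch. It is
not the actual Org parser code; the buffer contents, the regexp, and
the repetition counts are made up for illustration:

  (require 'benchmark)

  (with-temp-buffer
    ;; Fill a scratch buffer with made-up content.
    (dotimes (_ 10000)
      (insert "* Example heading with some text\n"))
    (list
     ;; Variant 1: store only begin/end buffer positions.
     (benchmark-run 100
       (let (acc)
         (goto-char (point-min))
         (while (re-search-forward "^\\* .*$" nil t)
           (push (cons (match-beginning 0) (match-end 0)) acc))))
     ;; Variant 2: store string copies of the matched text.
     (benchmark-run 100
       (let (acc)
         (goto-char (point-min))
         (while (re-search-forward "^\\* .*$" nil t)
           (push (buffer-substring (match-beginning 0) (match-end 0)) acc))))))

The third element of each `benchmark-run' result is the time spent in
GC, which is where the extra allocation cost of the string-copying
variant tends to show up.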
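
P.P.S. And to make the `with-mutex'-style object locking idea more
concrete, a hypothetical sketch using the thread primitives Emacs
already has; the `my/...' names are invented for this example and are
not an existing API:

  (defvar my/table-lock (make-mutex "my/table-lock")
    "Mutex guarding `my/table'.")

  (defvar my/table (make-hash-table :test #'equal)
    "Example Lisp object shared between threads.")

  (defun my/table-put (key value)
    "Write VALUE under KEY while holding `my/table-lock'."
    (with-mutex my/table-lock
      (puthash key value my/table)))

  (defun my/table-get (key)
    "Read KEY while holding `my/table-lock'."
    (with-mutex my/table-lock
      (gethash key my/table)))

The point is that only code touching this particular object ever
contends on the mutex, unlike with a global lock.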