From mboxrd@z Thu Jan  1 00:00:00 1970
From: Nix
Newsgroups: gmane.emacs.devel
Subject: Re: Emacs bzr memory footprint
Date: Fri, 21 Oct 2011 01:19:53 +0100
Message-ID: <87ty73mc0m.fsf@spindle.srvr.nix>
References: <83fwix2osa.fsf@gnu.org>
	<0B3EE7A4-D0D6-4D1E-ADC4-0BEE68F179B2@mit.edu>
	<87fwivwp37.fsf@turtle.gmx.de> <87sjmvpmd2.fsf@lifelogs.com>
	<87aa93wmc4.fsf@turtle.gmx.de> <87sjmnrdjw.fsf@spindle.srvr.nix>
To: John Wiegley
Cc: emacs-devel@gnu.org
In-Reply-To: (John Wiegley's message of "Thu, 20 Oct 2011 18:02:00 -0500")
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.0.50 (gnu/linux)
Emacs: well, why *shouldn't* you pay property taxes on your editor?

On 21 Oct 2011, John Wiegley verbalised:

>>>>>> Nix writes:
>
>> Gnus is clearly driving Emacs much harder than mere cc-mode ever does.
>> I'm particularly stunned by the number of string-chars in use. It must
>> have some huge variables defined, probably holding overview data or
>> something... I'm not sure how large some of those things are: if
>> they're large structures this might explain most or all of the memory
>> consumption.
>
> Do you use the Gnus Registry?

I hadn't noticed it existed, so no (though it looks rather nice and I
shall probably be using it in due course, pushing my memory usage up
even further).

> Which backends do you use?

nnml, nntp, a couple of tiny nndoc, and one, one single huge nnmh group
(containing filtered spam less than six months old).

> How many messages are in mailboxes that you open?

Some of them are quite large: my primary nnml mailbox has 16000 mails
in it, with a 6MB overview (though only 60-odd are visible).
Some of my nntp groups never expire, so are very, very large
(100000-odd articles), but, again, only a few dozen to a few hundred
articles will be unread and visible at any time. (The largest overview
for a single group is 46MB, but if Gnus is reading the entire overview
database for an nntp group in, it's doing something wrong!)

However, this cannot explain the memory consumption, because I check
most of these groups out within a few minutes of starting Emacs, and
memory consumption then is around 300MB. The rise from then on is
inexorable, though not steady: where the figures from ten hours ago
were

 STIME    RSS     VSZ
 Oct07  832348 1127088
 Oct07  226916  499588

now they are

 STIME    RSS     VSZ
 Oct07  876524 1170572
 Oct07  227016  499588

So the coding Emacs has hardly budged, but the newsreading one has
chewed up another 50MB. It's a good thing this machine has 24GB RAM :)

pmap shows the following sizeable anonymous regions:

0000000001f18000 850596K rw---    [ anon ]
00007fbf8ff29000      4K rw---    [ anon ]
00007fbf98000000    132K rw---    [ anon ]
00007fbf98021000  65404K -----    [ anon ]
00007fbf9f9b4000      4K -----    [ anon ]
00007fbf9f9b5000  32768K rw---    [ anon ]
00007fbfa1f6d000    772K rw---    [ anon ]
00007fbfa22b6000   1260K rw---    [ anon ]
00007fbfa266a000    236K rw---    [ anon ]
00007fbfa2ca1000     76K rw---    [ anon ]
00007fbfa2dc2000    772K rw---    [ anon ]
00007fbfa2e83000     28K rw---    [ anon ]
00007fbfa2ec9000      4K rw---    [ anon ]
00007fbfa2f0e000   1544K rw---    [ anon ]
00007fbfa309b000    168K rw---    [ anon ]

Heap fragmentation might explain this, but most of the big allocations
(e.g. for huge overviews) should be going into separately mmap()ed
regions and getting freed, not into that 850MB pig of a heap. With
luck it's just one, but luck does not accompany me on trips like this.

(I too spent some time fruitlessly instrumenting XEmacs for signs of
the cause of its huge memory usage, and came to the same conclusion as
Stephen: it's not XEmacs, it's the toolkits.
The same may be true here, though at least I'm using Lucid, not Gtk,
so we can rule out *that* mountain of code.)

Also note that XEmacs's huge memory usage was accompanied by a radical
slowdown in GC times that eventually forced a restart if I was to get
anything done. By contrast, this ballooning is not accompanied by any
slowdown in GC: a GC still takes only about 1/5s, barely slower than
when Emacs is freshly started.

Hm. On second thought, IIRC Gnus allocates some very large lists as
part of its overview management or something: perhaps this is serving
to spam the arena with a huge number of (individually small, thus not
mmap()-allocated) atoms which, when they get freed later, produce a
very sparsely-filled, severely-fragmented heap? If so, perhaps Emacs
would benefit from a simple pool allocator, accessed via a new
let/setq form or a new arg to create-buffer, so Gnus could arrange to
stuff variables it knows will be huge, or buffer-local variables of
buffers it thinks may have lots of huge buffer-local vars, into a
newly-mmap()ed region? Unfortunately that means, sigh, using our own
malloc() again, which is probably more painful than useful.

I suspect actually proving my contention first would be a good idea.
Not sure how to get the addresses of Lisp objects from a running
Emacs, though: gdb, presumably.

-- 
NULL && (void)