* bug#43389: 28.0.50; Emacs memory leaks
2020-09-14 0:43 bug#43389: 28.0.50; Emacs memory leaks Michael Heerdegen
@ 2020-09-14 19:09 ` Juri Linkov
2020-09-15 0:32 ` Michael Heerdegen
2020-09-17 20:59 ` Thomas Ingram
` (3 subsequent siblings)
4 siblings, 1 reply; 110+ messages in thread
From: Juri Linkov @ 2020-09-14 19:09 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389
> from time to time my Emacs' memory usage grows above 4 GB for no obvious
> reason. I didn't investigate when that happened so far, will do the
> next time.
>
> Anybody who sees the same problem is invited to provide details!
Maybe manually evaluating (clear-image-cache) helps to free memory?
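One way to try that directly, e.g. with M-: or from the *scratch* buffer; a
minimal sketch, assuming a graphical session (both functions are built in):

(progn
  (clear-image-cache t)  ; drop cached images on all frames
  (garbage-collect))     ; force a GC; returns the per-type object counts

clear-image-cache with a t argument clears the caches of all frames, not
just the selected one.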
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-14 19:09 ` Juri Linkov
@ 2020-09-15 0:32 ` Michael Heerdegen
2020-09-15 17:54 ` Russell Adams
0 siblings, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-09-15 0:32 UTC (permalink / raw)
To: Juri Linkov; +Cc: 43389
Juri Linkov <juri@linkov.net> writes:
> Maybe manually evaluating (clear-image-cache) helps to free memory?
I'll try that the next time this happens. I would not expect the image
cache to be the cause though: I don't view many images in Emacs, and I
typically rebuild and restart Emacs daily.
Thanks,
Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-15 0:32 ` Michael Heerdegen
@ 2020-09-15 17:54 ` Russell Adams
2020-09-15 18:52 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Russell Adams @ 2020-09-15 17:54 UTC (permalink / raw)
To: 43389
On Tue, Sep 15, 2020 at 02:32:19AM +0200, Michael Heerdegen wrote:
> I'll try that the next time this happens. I would not expect the image
> cache to be the cause though: I don't view many images in Emacs, and I
> typically rebuild and restart Emacs daily.
htop says my emacs RSS is now 5148MB. I ran M-x garbage-collect and it ran
at 100% cpu for 5 minutes and released nothing. I also tried manually
executing (clear-image-cache), and that freed nothing either.
I run Emacs 27.1 as a daemon, uptime 4 days, 3 hours, 22 minutes, 53
seconds. Yesterday conky was reporting Emacs at 28% memory usage,
today it's at 33%. No dramatically huge files loaded, just a few
megabytes of text. No inline images (local or remote).
In GNU Emacs 27.1 (build 2, x86_64-pc-linux-gnu, X toolkit, Xaw3d scroll bars)
of 2020-08-17 built on maokai
Windowing system distributor 'The X.Org Foundation', version 11.0.12008000
System Description: Gentoo/Linux
Recent messages:
Unable to load color "unspecified-fg" [4 times]
4 days, 3 hours, 22 minutes, 53 seconds
Configured using:
'configure --prefix=/home/adamsrl/.local/stow/emacs-27.1
--without-libsystemd --without-dbus --with-x-toolkit=lucid'
Configured features:
XAW3D XPM JPEG TIFF GIF PNG RSVG SOUND GSETTINGS GLIB NOTIFY INOTIFY ACL
GNUTLS LIBXML2 FREETYPE HARFBUZZ XFT ZLIB TOOLKIT_SCROLL_BARS LUCID X11
XDBE XIM MODULES THREADS JSON PDUMPER LCMS2 GMP
Important settings:
value of $LANG: en_US.utf8
locale-coding-system: utf-8-unix
Major mode: Org
Minor modes in effect:
recentf-mode: t
flyspell-mode: t
pdf-occur-global-minor-mode: t
helm-mode: t
helm-ff-cache-mode: t
helm--remap-mouse-mode: t
async-bytecomp-package-mode: t
shell-dirtrack-mode: t
show-paren-mode: t
savehist-mode: t
global-hl-line-mode: t
override-global-mode: t
tooltip-mode: t
global-eldoc-mode: t
electric-indent-mode: t
mouse-wheel-mode: t
file-name-shadow-mode: t
global-font-lock-mode: t
font-lock-mode: t
auto-composition-mode: t
auto-encryption-mode: t
auto-compression-mode: t
column-number-mode: t
line-number-mode: t
auto-fill-function: org-auto-fill-function
abbrev-mode: t
Load-path shadows:
/home/adamsrl/.quicklisp/dists/quicklisp/software/slime-v2.24/slime-tests hides /home/adamsrl/.config/emacs/elpa/slime-20200810.224/slime-tests
/home/adamsrl/.quicklisp/dists/quicklisp/software/slime-v2.24/slime hides /home/adamsrl/.config/emacs/elpa/slime-20200810.224/slime
/home/adamsrl/.quicklisp/dists/quicklisp/software/slime-v2.24/slime-autoloads hides /home/adamsrl/.config/emacs/elpa/slime-20200810.224/slime-autoloads
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-stan hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-stan
/home/adamsrl/.config/emacs/elpa/org-20200810/org-macs hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-macs
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-gnuplot hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-gnuplot
/home/adamsrl/.config/emacs/elpa/org-20200810/org-num hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-num
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sql hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sql
/home/adamsrl/.config/emacs/elpa/org-20200810/org-lint hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-lint
/home/adamsrl/.config/emacs/elpa/org-20200810/ol hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol
/home/adamsrl/.config/emacs/elpa/org-20200810/org-indent hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-indent
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-perl hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-perl
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lisp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lisp
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-maxima hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-maxima
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-tangle hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-tangle
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-vala hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-vala
/home/adamsrl/.config/emacs/elpa/org-20200810/org-tempo hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-tempo
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-comint hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-comint
/home/adamsrl/.config/emacs/elpa/org-20200810/org-list hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-list
/home/adamsrl/.config/emacs/elpa/org-20200810/org-src hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-src
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-irc hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-irc
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-hledger hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-hledger
/home/adamsrl/.config/emacs/elpa/org-20200810/org-goto hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-goto
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-latex hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-latex
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-latex hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-latex
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-org hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-org
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-exp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-exp
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-abc hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-abc
/home/adamsrl/.config/emacs/elpa/org-20200810/ox hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-groovy hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-groovy
/home/adamsrl/.config/emacs/elpa/org-20200810/org-mouse hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-mouse
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-publish hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-publish
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-coq hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-coq
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ocaml hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ocaml
/home/adamsrl/.config/emacs/elpa/org-20200810/org-version hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-version
/home/adamsrl/.config/emacs/elpa/org-20200810/org-habit hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-habit
/home/adamsrl/.config/emacs/elpa/org-20200810/org-agenda hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-agenda
/home/adamsrl/.config/emacs/elpa/org-20200810/org-ctags hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-ctags
/home/adamsrl/.config/emacs/elpa/org-20200810/org-attach hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-attach
/home/adamsrl/.config/emacs/elpa/org-20200810/org-colview hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-colview
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-rmail hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-rmail
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-matlab hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-matlab
/home/adamsrl/.config/emacs/elpa/org-20200810/org-install hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-install
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-bibtex hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-bibtex
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-eval hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-eval
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-makefile hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-makefile
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-calc hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-calc
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-python hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-python
/home/adamsrl/.config/emacs/elpa/org-20200810/org-timer hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-timer
/home/adamsrl/.config/emacs/elpa/org-20200810/org-crypt hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-crypt
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-org hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-org
/home/adamsrl/.config/emacs/elpa/org-20200810/org-clock hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-clock
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ruby hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ruby
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-fortran hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-fortran
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-docview hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-docview
/home/adamsrl/.config/emacs/elpa/org-20200810/org-pcomplete hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-pcomplete
/home/adamsrl/.config/emacs/elpa/org-20200810/org-macro hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-macro
/home/adamsrl/.config/emacs/elpa/org-20200810/org-element hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-element
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ditaa hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ditaa
/home/adamsrl/.config/emacs/elpa/org-20200810/org-table hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-table
/home/adamsrl/.config/emacs/elpa/org-20200810/ob hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-mscgen hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-mscgen
/home/adamsrl/.config/emacs/elpa/org-20200810/org-footnote hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-footnote
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-eww hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-eww
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lob hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lob
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-haskell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-haskell
/home/adamsrl/.config/emacs/elpa/org-20200810/org-faces hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-faces
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-md hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-md
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-table hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-table
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-awk hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-awk
/home/adamsrl/.config/emacs/elpa/org-20200810/org-mobile hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-mobile
/home/adamsrl/.config/emacs/elpa/org-20200810/org-archive hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-archive
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ref hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ref
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-emacs-lisp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-emacs-lisp
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-dot hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-dot
/home/adamsrl/.config/emacs/elpa/org-20200810/org-duration hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-duration
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-js hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-js
/home/adamsrl/.config/emacs/elpa/org-20200810/org hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-beamer hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-beamer
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-ascii hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-ascii
/home/adamsrl/.config/emacs/elpa/org-20200810/org-loaddefs hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-loaddefs
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-shell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-shell
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-scheme hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-scheme
/home/adamsrl/.config/emacs/elpa/org-20200810/org-entities hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-entities
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ebnf hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ebnf
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-plantuml hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-plantuml
/home/adamsrl/.config/emacs/elpa/org-20200810/org-keys hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-keys
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lilypond hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lilypond
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-C hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-C
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-J hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-J
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-mhe hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-mhe
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-info hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-info
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sed hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sed
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lua hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lua
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-octave hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-octave
/home/adamsrl/.config/emacs/elpa/org-20200810/org-attach-git hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-attach-git
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-forth hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-forth
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-w3m hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-w3m
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ledger hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ledger
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-screen hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-screen
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-java hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-java
/home/adamsrl/.config/emacs/elpa/org-20200810/org-datetree hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-datetree
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sqlite hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sqlite
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-shen hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-shen
/home/adamsrl/.config/emacs/elpa/org-20200810/org-id hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-id
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-asymptote hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-asymptote
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-html hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-html
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-io hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-io
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-man hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-man
/home/adamsrl/.config/emacs/elpa/org-20200810/org-feed hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-feed
/home/adamsrl/.config/emacs/elpa/org-20200810/org-protocol hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-protocol
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-eshell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-eshell
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-texinfo hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-texinfo
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-core hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-core
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-clojure hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-clojure
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-R hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-R
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-icalendar hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-icalendar
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-picolisp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-picolisp
/home/adamsrl/.config/emacs/elpa/org-20200810/org-plot hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-plot
/home/adamsrl/.config/emacs/elpa/org-20200810/org-compat hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-compat
/home/adamsrl/.config/emacs/elpa/org-20200810/org-capture hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-capture
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-bbdb hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-bbdb
/home/adamsrl/.config/emacs/elpa/org-20200810/org-inlinetask hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-inlinetask
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-eshell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-eshell
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-css hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-css
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-processing hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-processing
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sass hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sass
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-odt hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-odt
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-gnus hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-gnus
Features:
(shadow sort mail-extr warnings emacsbug time org-num org-tempo tempo
org-protocol org-mouse org-mobile org-indent org-goto org-feed org-crypt
org-attach lisp-mnt mm-archive org-archive timezone gnutls
network-stream url-cache org-clock conf-mode image-file ffap cal-move
tabify dabbrev ob-org help-fns radix-tree sh-script executable log-edit
pcvs-util add-log smerge-mode diff vc helm-command helm-elisp helm-eval
edebug backtrace mule-util misearch multi-isearch vc-git sendmail
term/rxvt term/screen term/xterm xterm rx mhtml-mode css-mode-expansions
css-mode smie eww mm-url url-queue js-mode-expansions js
cc-mode-expansions cc-mode cc-fonts cc-guess cc-menus cc-cmds cc-styles
cc-align cc-engine cc-vars cc-defs html-mode-expansions sgml-mode winner
recentf tree-widget helm-x-files org-duration cal-iso vc-dispatcher
vc-hg diff-mode flyspell ispell ol-eww ol-rmail ol-mhe ol-irc ol-info
ol-gnus nnir ol-docview doc-view ol-bibtex bibtex ol-bbdb ol-w3m
face-remap org-agenda server company-oddmuse company-keywords
company-etags company-gtags company-dabbrev-code company-dabbrev
company-files company-clang company-capf company-cmake company-semantic
company-template company-bbdb org-caldav org-id url-dav url-http
url-auth url-gw nsm pdf-occur ibuf-ext ibuffer ibuffer-loaddefs tablist
tablist-filter semantic/wisent/comp semantic/wisent
semantic/wisent/wisent semantic/util-modes semantic/util semantic
semantic/tag semantic/lex semantic/fw mode-local cedet pdf-isearch
let-alist pdf-misc imenu pdf-tools cus-edit cus-start cus-load pdf-view
jka-compr pdf-cache pdf-info tq pdf-util image-mode exif org-noter
ox-odt rng-loc rng-uri rng-parse rng-match rng-dt rng-util rng-pttrn
nxml-parse nxml-ns nxml-enc xmltok nxml-util ox-latex ox-icalendar
ox-html table ox-ascii ox-publish ox org-element avl-tree gnus-icalendar
org-capture gnus-art mm-uu mml2015 mm-view mml-smime smime dig gnus-sum
shr svg dom gnus-group gnus-undo gnus-start gnus-cloud nnimap nnmail
mail-source utf7 netrc nnoo gnus-spec gnus-int gnus-range message rmc
puny dired dired-loaddefs rfc822 mml mml-sec epa derived epg epg-config
mailabbrev mailheader gnus-win gnus nnheader gnus-util rmail
rmail-loaddefs mail-utils wid-edit mm-decode mm-bodies mm-encode
mail-parse rfc2231 rfc2047 rfc2045 mm-util ietf-drums mail-prsvr
gmm-utils icalendar ob-sql ob-shell skeleton appt diary-lib
diary-loaddefs slime-fancy slime-indentation slime-cl-indent cl-indent
slime-trace-dialog slime-fontifying-fu slime-package-fu slime-references
slime-compiler-notes-tree slime-scratch slime-presentations bridge
slime-macrostep macrostep slime-mdot-fu slime-enclosing-context
slime-fuzzy slime-fancy-trace slime-fancy-inspector slime-c-p-c
slime-editing-commands slime-autodoc slime-repl slime-parse slime
compile etags fileloop generator xref project arc-mode archive-mode
hyperspec orgalist the-org-mode-expansions org ob ob-tangle ob-ref
ob-lob ob-table ob-exp org-macro org-footnote org-src ob-comint
org-pcomplete org-list org-faces org-entities noutline outline
org-version ob-emacs-lisp ob-core ob-eval org-table ol org-keys
org-compat org-macs org-loaddefs find-func cal-menu calendar
cal-loaddefs helm-recoll helm-for-files helm-bookmark helm-adaptive
helm-info bookmark text-property-search pp helm-external helm-net xml
url url-proxy url-privacy url-expand url-methods url-history url-cookie
url-domsuf url-util mailcap ido helm-mode helm-files helm-buffers
helm-occur helm-tags helm-locate helm-grep helm-regexp helm-utils
helm-help helm-types helm async-bytecomp helm-global-bindings
helm-easymenu helm-source eieio-compat helm-multi-match helm-lib async
helm-config vc-fossil expand-region text-mode-expansions
er-basic-expansions expand-region-core expand-region-custom company
pcase multiple-cursors mc-hide-unmatched-lines-mode
mc-separate-operations rectangular-region-mode mc-mark-pop mc-mark-more
thingatpt mc-cycle-cursors mc-edit-lines multiple-cursors-core advice
rect paredit htmlize monky tramp tramp-loaddefs trampver
tramp-integration files-x tramp-compat shell pcomplete comint ansi-color
ring parse-time iso8601 time-date ls-lisp format-spec view ediff
ediff-merg ediff-mult ediff-wind ediff-diff ediff-help ediff-init
ediff-util bindat cl color rainbow-delimiters cl-extra help-mode paren
edmacro kmacro savehist dracula-theme hl-line use-package
use-package-ensure use-package-delight use-package-diminish
use-package-bind-key bind-key easy-mmode use-package-core finder-inf
slime-autoloads info package easymenu browse-url url-handlers url-parse
auth-source cl-seq eieio eieio-core cl-macs eieio-loaddefs
password-cache json subr-x map url-vars seq byte-opt gv bytecomp
byte-compile cconv cl-loaddefs cl-lib tooltip eldoc electric uniquify
ediff-hook vc-hooks lisp-float-type mwheel term/x-win x-win
term/common-win x-dnd tool-bar dnd fontset image regexp-opt fringe
tabulated-list replace newcomment text-mode elisp-mode lisp-mode
prog-mode register page tab-bar menu-bar rfn-eshadow isearch timer
select scroll-bar mouse jit-lock font-lock syntax facemenu font-core
term/tty-colors frame minibuffer cl-generic cham georgian utf-8-lang
misc-lang vietnamese tibetan thai tai-viet lao korean japanese eucjp-ms
cp51932 hebrew greek romanian slovak czech european ethiopic indian
cyrillic chinese composite charscript charprop case-table epa-hook
jka-cmpr-hook help simple abbrev obarray cl-preloaded nadvice loaddefs
button faces cus-face macroexp files text-properties overlay sha1 md5
base64 format env code-pages mule custom widget hashtable-print-readable
backquote threads inotify lcms2 dynamic-setting system-font-setting
font-render-setting x-toolkit x multi-tty make-network-process emacs)
Memory information:
((conses 16 1997471 1645948)
(symbols 48 52500 1)
(strings 32 328202 267401)
(string-bytes 1 10837531)
(vectors 16 133457)
(vector-slots 8 2460308 965956)
(floats 8 808 4810)
(intervals 56 184154 78227)
(buffers 1000 129))
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-15 17:54 ` Russell Adams
@ 2020-09-15 18:52 ` Eli Zaretskii
2020-09-15 21:12 ` Russell Adams
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-09-15 18:52 UTC (permalink / raw)
To: Russell Adams; +Cc: 43389
> Date: Tue, 15 Sep 2020 19:54:18 +0200
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> htop says my emacs RSS is now 5148MB. I ran M-x garbage-collect and it ran
> at 100% cpu for 5 minutes and released nothing. I also tried manually
> executing (clear-image-cache), and that freed nothing either.
Can you use some utility that produces a memory map of an application,
and see how much of those 5GB are actually free for allocation by
Emacs? Also, do you see any libraries used by Emacs that have high
memory usage?
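For reference, one low-tech way to get such a map from inside Emacs on
GNU/Linux, assuming the pmap utility from procps is installed; just a
sketch, any similar tool would do:

;; Show the per-mapping memory breakdown of this Emacs process in the
;; buffer *emacs-pmap*; large anonymous/[heap] entries are the ones to
;; look at.
(shell-command (format "pmap -x %d" (emacs-pid)) "*emacs-pmap*")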
> I run Emacs 27.1 as a daemon, uptime 4 days, 3 hours, 22 minutes, 53
> seconds. Yesterday conky was reporting Emacs at 28% memory usage,
> today it's at 33%.
28% and 33% of what amount?
If your RSS is 5GB after 4 days of uptime, and the memory footprint
grows at a constant rate, it would mean more than 1GB per day. But
I'm guessing that 33% - 28% = 5% of your total memory is much less
than 1GB. In which case the memory footprint must sometimes jump by
very large amounts, not grow slowly and monotonically each day.
Right? So which events cause those sudden increases in RSS?
Also, what is your value of gc-cons-threshold, and do you have some
customizations that change its value under some conditions? If so,
please tell the details.
Thanks.
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-15 18:52 ` Eli Zaretskii
@ 2020-09-15 21:12 ` Russell Adams
2020-09-16 14:52 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Russell Adams @ 2020-09-15 21:12 UTC (permalink / raw)
To: 43389
On Tue, Sep 15, 2020 at 09:52:45PM +0300, Eli Zaretskii wrote:
> > htop says my emacs RSS is now 5148MB. I ran M-x garbage-collect and it ran
>
> Can you use some utility that produces a memory map of an application,
> and see how much of those 5GB are actually free for allocation by
> Emacs?
Any suggestions? I still have it running. I used htop because it shows
a sane total value.
> Also, do you see any libraries used by Emacs that have high
> memory usage?
Emacs is the top memory user on my laptop; Firefox is second at
2GB. The rest are <1GB.
> 28% and 33% of what amount?
16GB
> If your RSS is 5GB after 4 days of uptime, and the memory footprint
> grows at a constant rate, it would mean more than 1GB per day. But
> I'm guessing that 33% - 28% = 5% of your total memory is much less
> than 1GB.
No, 33% is ~5GB. ;]
> In which case the memory footprint must sometimes jump by
> very large amounts, not grow slowly and monotonically each day.
> Right? So which events cause those sudden increases in RSS?
I can't say. I have a few megs total in buffers open, and I've run
org-caldav a few times to upload. Mostly org-mode buffers open, a few
mail buffers (not gnus, just mail-mode editing mutt files), package
list, and cruft. Not actively doing any development, just editing Org
files.
I don't recall having edited any huge files in the last 4 days.
> Also, what is your value of gc-cons-threshold, and do you have some
> customizations that change its value under some conditions? If so,
> please tell the details.
gc-cons-threshold is 800000 (#o3032400, #xc3500).
No memory-related customization that I'm aware of. The only thing that may
be relevant is my savehist settings, but that file is only 98k (down
from 500MB in Emacs 26). I've now limited my savehist history lengths.
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-15 21:12 ` Russell Adams
@ 2020-09-16 14:52 ` Eli Zaretskii
2020-09-17 20:47 ` Russell Adams
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-09-16 14:52 UTC (permalink / raw)
To: Russell Adams; +Cc: 43389
> Date: Tue, 15 Sep 2020 23:12:09 +0200
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> > Can you use some utility that produces a memory map of an application,
> > and see how much of those 5GB are actually free for allocation by
> > Emacs?
>
> Any suggestions?
Your Internet search is as good as mine. This page offers some
possibilities:
https://stackoverflow.com/questions/36523584/how-to-see-memory-layout-of-my-program-in-c-during-run-time
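The kernel's own per-process map is also easy to inspect from inside
Emacs on GNU/Linux; a minimal sketch, no external tools assumed:

;; /proc/<pid>/smaps lists every mapping together with its Rss and
;; Private_Dirty sizes; the [heap] entry is the interesting one here.
(find-file-read-only (format "/proc/%d/smaps" (emacs-pid)))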
> > Also, do you see any libraries used by Emacs that have high
> > memory usage?
>
> Emacs is the top memory usage on my laptop, firefox is second at
> 2GB. The rest are <1G.
No, I meant the shared libraries that Emacs loads. Maybe one of them
has a leak, not Emacs's own code.
> > 28% and 33% of what amount?
>
> 16GB
>
> > If your RSS is 5GB after 4 days of uptime, and the memory footprint
> > grows at a constant rate, it would mean more than 1GB per day. But
> > I'm guessing that 33% - 28% = 5% of your total memory is much less
> > than 1GB.
>
> No, 33% is ~5GB. ;]
>
> > In which case the memory footprint must sometimes jump by
> > very large amounts, not grow slowly and monotonically each day.
> > Right? So which events cause those sudden increases in RSS?
>
> I can't say.
Well, actually the above seems to indicate that your memory footprint
grows by about 1GB each day: 5% of 16GB is 0.8GB. So maybe my guess
is wrong, and the memory does increase roughly linearly with time.
Hmm...
We have discussed several times the possible effects of the fact that
glibc doesn't return malloc'ed memory to the system. I don't think we
reached any firm conclusions about that, but it could be that some
usage patterns cause memory fragmentation, whereby small chunks of
free'd memory get "trapped" between regions of used memory and cannot
be reallocated.
We used to use some specialized malloc features to prevent this, but
AFAIU they are no longer supported on modern GNU/Linux systems.
Not sure whether this is relevant to what you see.
Anyway, I think the way forward is to try to understand which code
"owns" the bulk of the 5GB memory. Then maybe we will have some
ideas.
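One possible starting point for the Lisp side of that, assuming the
memory-usage package from GNU ELPA is installed; a minimal sketch:

(progn
  (require 'memory-usage)  ; GNU ELPA package, not bundled with Emacs
  (memory-usage))          ; buffer with per-type Lisp object totals

That only accounts for Lisp objects and buffers, though; memory held
outside the Lisp heap would not show up there.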
Thanks.
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-16 14:52 ` Eli Zaretskii
@ 2020-09-17 20:47 ` Russell Adams
2020-09-17 21:58 ` Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors
` (2 more replies)
0 siblings, 3 replies; 110+ messages in thread
From: Russell Adams @ 2020-09-17 20:47 UTC (permalink / raw)
To: 43389
From the Emacs memory-usage package:
Garbage collection stats:
((conses 16 1912248 251798) (symbols 48 54872 19) (strings 32 327552 81803) (string-bytes 1 12344346) (vectors 16 158994) (vector-slots 8 2973919 339416) (floats 8 992 4604) (intervals 56 182607 7492) (buffers 1000 195))
=> 29.2MB (+ 3.84MB dead) in conses
2.51MB (+ 0.89kB dead) in symbols
10.00MB (+ 2.50MB dead) in strings
11.8MB in string-bytes
2.43MB in vectors
22.7MB (+ 2.59MB dead) in vector-slots
7.75kB (+ 36.0kB dead) in floats
9.75MB (+ 410kB dead) in intervals
190kB in buffers
Total in lisp objects: 97.9MB (live 88.5MB, dead 9.36MB)
Buffer ralloc memory usage:
81 buffers
4.71MB total (1007kB in gaps)
----------------------------------------------------------------------
And /proc/PID/smaps is huge, so I pastebinned it:
https://termbin.com/2sx5
Of interest is:
56413d24a000-5642821c6000 rw-p 00000000 00:00 0 [heap]
Size: 5324272 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 5245496 kB
Pss: 5245496 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 5245496 kB
Referenced: 5245496 kB
Anonymous: 5245496 kB
LazyFree: 0 kB
AnonHugePages: 0 kB
ShmemPmdMapped: 0 kB
FilePmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
Locked: 0 kB
THPeligible: 0
VmFlags: rd wr mr mw me ac
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-17 20:47 ` Russell Adams
@ 2020-09-17 21:58 ` Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors
2020-09-17 23:09 ` Russell Adams
2020-09-18 6:56 ` Eli Zaretskii
2020-09-18 8:22 ` Eli Zaretskii
2020-11-26 15:42 ` Russell Adams
2 siblings, 2 replies; 110+ messages in thread
From: Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors @ 2020-09-17 21:58 UTC (permalink / raw)
To: 43389
Over in #guix irc, the guix people seemed to think it was a memory leak with helm.
I was watching my emacs consume about 0.1% more system memory every 2 or 3 seconds. Setting
(setq helm-ff-keep-cached-candidates nil)
seemed to make the problem go away.
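For anyone who wants to try the same thing in a running session, a
minimal sketch (the variable is helm's; the rest is plain Emacs Lisp):

;; Record the old value, then disable helm's candidate cache and watch
;; whether the footprint keeps growing; a hypothesis, not a confirmed fix.
(progn
  (message "was: %S" (bound-and-true-p helm-ff-keep-cached-candidates))
  (setq helm-ff-keep-cached-candidates nil))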
I also made a video, where I watched this memory usage continually go up
and then stay steady after I turned off helm-ff-keep-cached-candidates.
This happens at about the 35 minute mark.
https://video.hardlimit.com/videos/watch/3069e16a-d75c-4e40-8686-9102e40e333f
And here's the bug report on guix system:
https://issues.guix.gnu.org/43406#10
--
Joshua Branson
Sent from Emacs and Gnus
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-17 21:58 ` Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors
@ 2020-09-17 23:09 ` Russell Adams
2020-09-18 6:56 ` Eli Zaretskii
1 sibling, 0 replies; 110+ messages in thread
From: Russell Adams @ 2020-09-17 23:09 UTC (permalink / raw)
To: 43389
I haven't tried to recreate it yet; I still have the session open. I'm
monitoring whether it grows and hoping to find something useful in the
existing process.
On Thu, Sep 17, 2020 at 05:58:51PM -0400, Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors wrote:
>
> Over in #guix irc, the guix people seemed to think it was a memory leak with helm.
>
> I was watching my emacs consume about 0.1% more system memory every 2 or 3 seconds. Setting
>
> (setq helm-ff-keep-cached-candidates nil)
>
> seemed to make the problem go away.
>
> I also made a video, where I watched this memory usage continually go up
> and then stay steady after I turned off helm-ff-keep-cached-candidates.
> This happens at about the 35 minute mark.
>
> https://video.hardlimit.com/videos/watch/3069e16a-d75c-4e40-8686-9102e40e333f
>
> And here's the bug report on guix system:
>
> https://issues.guix.gnu.org/43406#10
>
>
> --
> Joshua Branson
> Sent from Emacs and Gnus
>
>
>
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-17 21:58 ` Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors
2020-09-17 23:09 ` Russell Adams
@ 2020-09-18 6:56 ` Eli Zaretskii
2020-09-18 7:53 ` Robert Pluim
1 sibling, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-09-18 6:56 UTC (permalink / raw)
To: Joshua Branson; +Cc: 43389
> Date: Thu, 17 Sep 2020 17:58:51 -0400
> From: Joshua Branson via "Bug reports for GNU Emacs,
> the Swiss army knife of text editors" <bug-gnu-emacs@gnu.org>
>
>
> Over in #guix irc, the guix people seemed to think it was a memory leak with helm.
Thanks.
But if it's due to helm, why doesn't the huge memory usage show in the
report produced by GC? That report should show all the Lisp object
that we allocate and manage, no? Where does helm-ff-cache keeps those
"candidates"? (And what is this cache, if someone could be kind
enough to describe it?)
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-18 6:56 ` Eli Zaretskii
@ 2020-09-18 7:53 ` Robert Pluim
2020-09-18 8:13 ` Eli Zaretskii
2020-09-20 20:08 ` jbranso--- via Bug reports for GNU Emacs, the Swiss army knife of text editors
0 siblings, 2 replies; 110+ messages in thread
From: Robert Pluim @ 2020-09-18 7:53 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, Joshua Branson
>>>>> On Fri, 18 Sep 2020 09:56:14 +0300, Eli Zaretskii <eliz@gnu.org> said:
>> Date: Thu, 17 Sep 2020 17:58:51 -0400
>> From: Joshua Branson via "Bug reports for GNU Emacs,
>> the Swiss army knife of text editors" <bug-gnu-emacs@gnu.org>
>>
>>
>> Over in #guix irc, the guix people seemed to think it was a memory leak with helm.
Eli> Thanks.
Eli> But if it's due to helm, why doesn't the huge memory usage show in the
Eli> report produced by GC? That report should show all the Lisp objects
Eli> that we allocate and manage, no? Where does helm-ff-cache keep those
Eli> "candidates"? (And what is this cache, if someone could be kind
Eli> enough to describe it?)
Itʼs a hash table. It caches directory contents, as far as I can tell.
Robert
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-18 7:53 ` Robert Pluim
@ 2020-09-18 8:13 ` Eli Zaretskii
2020-09-20 20:08 ` jbranso--- via Bug reports for GNU Emacs, the Swiss army knife of text editors
1 sibling, 0 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-09-18 8:13 UTC (permalink / raw)
To: Robert Pluim; +Cc: 43389, jbranso
> From: Robert Pluim <rpluim@gmail.com>
> Cc: Joshua Branson <jbranso@dismail.de>, 43389@debbugs.gnu.org
> Date: Fri, 18 Sep 2020 09:53:59 +0200
>
> Eli> But if it's due to helm, why doesn't the huge memory usage show in the
> Eli> report produced by GC? That report should show all the Lisp objects
> Eli> that we allocate and manage, no? Where does helm-ff-cache keep those
> Eli> "candidates"? (And what is this cache, if someone could be kind
> Eli> enough to describe it?)
>
> Itʼs a hash table. It caches directory contents, as far as I can tell.
Then its memory usage should be part of the GC report, no?
I guess, if this helm feature is really the culprit, then the growth
of memory footprint is not due to the hash-table itself, but to
something else, which is not a Lisp object and gets allocated via
direct calls to malloc or something?
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-18 7:53 ` Robert Pluim
2020-09-18 8:13 ` Eli Zaretskii
@ 2020-09-20 20:08 ` jbranso--- via Bug reports for GNU Emacs, the Swiss army knife of text editors
1 sibling, 0 replies; 110+ messages in thread
From: jbranso--- via Bug reports for GNU Emacs, the Swiss army knife of text editors @ 2020-09-20 20:08 UTC (permalink / raw)
To: Eli Zaretskii, Robert Pluim; +Cc: 43389
Maybe I spoke a little too soon. I just saw two related bug reports and
thought I would connect them. Ludo actually closed the bug in Guix
System. He confirmed that for him, helm seemed to be the problem.
September 18, 2020 4:12 AM, "Eli Zaretskii" <eliz@gnu.org> wrote:
>> From: Robert Pluim <rpluim@gmail.com>
>> Cc: Joshua Branson <jbranso@dismail.de>, 43389@debbugs.gnu.org
>> Date: Fri, 18 Sep 2020 09:53:59 +0200
>>
>> Eli> But if it's due to helm, why doesn't the huge memory usage show in the
>> Eli> report produced by GC? That report should show all the Lisp objects
>> Eli> that we allocate and manage, no? Where does helm-ff-cache keep those
>> Eli> "candidates"? (And what is this cache, if someone could be kind
>> Eli> enough to describe it?)
>>
>> Itʼs a hash table. It caches directory contents, as far as I can tell.
>
> Then its memory usage should be part of the GC report, no?
>
> I guess, if this helm feature is really the culprit, then the growth
> of memory footprint is not due to the hash-table itself, but to
> something else, which is not a Lisp object and gets allocated via
> direct calls to malloc or something?
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-17 20:47 ` Russell Adams
2020-09-17 21:58 ` Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors
@ 2020-09-18 8:22 ` Eli Zaretskii
2020-11-09 20:46 ` Michael Heerdegen
2020-11-26 15:42 ` Russell Adams
2 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-09-18 8:22 UTC (permalink / raw)
To: Russell Adams; +Cc: 43389
> Date: Thu, 17 Sep 2020 22:47:04 +0200
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> From the Emacs memory-usage package:
>
> Garbage collection stats:
> ((conses 16 1912248 251798) (symbols 48 54872 19) (strings 32 327552 81803) (string-bytes 1 12344346) (vectors 16 158994) (vector-slots 8 2973919 339416) (floats 8 992 4604) (intervals 56 182607 7492) (buffers 1000 195))
>
> => 29.2MB (+ 3.84MB dead) in conses
> 2.51MB (+ 0.89kB dead) in symbols
> 10.00MB (+ 2.50MB dead) in strings
> 11.8MB in string-bytes
> 2.43MB in vectors
> 22.7MB (+ 2.59MB dead) in vector-slots
> 7.75kB (+ 36.0kB dead) in floats
> 9.75MB (+ 410kB dead) in intervals
> 190kB in buffers
>
> Total in lisp objects: 97.9MB (live 88.5MB, dead 9.36MB)
>
> Buffer ralloc memory usage:
> 81 buffers
> 4.71MB total (1007kB in gaps)
>
> ----------------------------------------------------------------------
>
> And /proc/PID/smaps is huge, so I pastebinned it:
>
> https://termbin.com/2sx5
Thanks.
> 56413d24a000-5642821c6000 rw-p 00000000 00:00 0 [heap]
> Size: 5324272 kB
> KernelPageSize: 4 kB
> MMUPageSize: 4 kB
> Rss: 5245496 kB
> Pss: 5245496 kB
> Shared_Clean: 0 kB
> Shared_Dirty: 0 kB
> Private_Clean: 0 kB
> Private_Dirty: 5245496 kB
> Referenced: 5245496 kB
> Anonymous: 5245496 kB
> LazyFree: 0 kB
> AnonHugePages: 0 kB
> ShmemPmdMapped: 0 kB
> FilePmdMapped: 0 kB
> Shared_Hugetlb: 0 kB
> Private_Hugetlb: 0 kB
> Swap: 0 kB
> SwapPss: 0 kB
> Locked: 0 kB
> THPeligible: 0
> VmFlags: rd wr mr mw me ac
So it seems to be our heap that takes most of the 5GB.
It might be interesting to see which operations/commands cause this
part to increase.
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-18 8:22 ` Eli Zaretskii
@ 2020-11-09 20:46 ` Michael Heerdegen
2020-11-09 21:24 ` Michael Heerdegen
` (2 more replies)
0 siblings, 3 replies; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-09 20:46 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, Russell Adams
Eli Zaretskii <eliz@gnu.org> writes:
> So it seems to be our heap that takes most of the 5GB.
Today it happened to me again. I'm writing from an Emacs session using
more than 5 GB of memory. I started it some hours ago and have no
clue why today was special. I didn't do anything exceptional.
Here is output from memory-usage:
Garbage collection stats:
((conses 16 2645730 3784206) (symbols 48 68678 724) (strings 32 528858 451889) (string-bytes 1 18127696) (vectors 16 213184) (vector-slots 8 3704641 2189052) (floats 8 2842 5514) (intervals 56 264780 87057) (buffers 992 119))
=> 40.4MB (+ 57.7MB dead) in conses
3.14MB (+ 33.9kB dead) in symbols
16.1MB (+ 13.8MB dead) in strings
17.3MB in string-bytes
3.25MB in vectors
28.3MB (+ 16.7MB dead) in vector-slots
22.2kB (+ 43.1kB dead) in floats
14.1MB (+ 4.65MB dead) in intervals
115kB in buffers
Total in lisp objects: 216MB (live 123MB, dead 93.0MB)
Buffer ralloc memory usage:
119 buffers
16.1MB total (1.71MB in gaps)
Anything I can do to find out more? I dunno how long I can keep this
session open. I tried `clear-image-cache'; it does not release any
memory.
Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-09 20:46 ` Michael Heerdegen
@ 2020-11-09 21:24 ` Michael Heerdegen
2020-11-09 21:51 ` Michael Heerdegen
2020-11-09 22:33 ` Jean Louis
2020-11-10 3:30 ` Eli Zaretskii
2 siblings, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-09 21:24 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, Russell Adams
Michael Heerdegen <michael_heerdegen@web.de> writes:
> Anything I can do to find out more? I dunno how long I can keep this
> session open. I tried `clear-image-cache'; it does not release any
> memory.
I found this line in pmap output:
0000557322314000 6257824K rw--- [ anon ]
Is it relevant?
Thanks,
Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-09 21:24 ` Michael Heerdegen
@ 2020-11-09 21:51 ` Michael Heerdegen
2020-11-10 3:36 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-09 21:51 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, Russell Adams
Michael Heerdegen <michael_heerdegen@web.de> writes:
> I found this line in pmap output:
>
> 0000557322314000 6257824K rw--- [ anon ]
I guess that's the heap again.
Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-09 21:51 ` Michael Heerdegen
@ 2020-11-10 3:36 ` Eli Zaretskii
2020-11-10 8:22 ` Andreas Schwab
2020-11-10 10:25 ` Michael Heerdegen
0 siblings, 2 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 3:36 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams
> From: Michael Heerdegen <michael_heerdegen@web.de>
> Cc: 43389@debbugs.gnu.org, Russell Adams <RLAdams@AdamsInfoServ.Com>
> Date: Mon, 09 Nov 2020 22:51:10 +0100
>
> Michael Heerdegen <michael_heerdegen@web.de> writes:
>
> > I found this line in pmap output:
> >
> > 0000557322314000 6257824K rw--- [ anon ]
>
> I guess that's the heap again.
Yes, the heap. So it looks more and more like this is the result of
glibc not releasing memory to the system, which with some usage
patterns causes the memory footprint to grow to a ludicrous size.
We need to find an expert on this and bring them aboard to find a
solution.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 3:36 ` Eli Zaretskii
@ 2020-11-10 8:22 ` Andreas Schwab
2020-11-10 12:59 ` Michael Heerdegen
2020-11-10 15:53 ` Eli Zaretskii
2020-11-10 10:25 ` Michael Heerdegen
1 sibling, 2 replies; 110+ messages in thread
From: Andreas Schwab @ 2020-11-10 8:22 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: Michael Heerdegen, 43389, RLAdams
On Nov 10 2020, Eli Zaretskii wrote:
>> From: Michael Heerdegen <michael_heerdegen@web.de>
>> Cc: 43389@debbugs.gnu.org, Russell Adams <RLAdams@AdamsInfoServ.Com>
>> Date: Mon, 09 Nov 2020 22:51:10 +0100
>>
>> Michael Heerdegen <michael_heerdegen@web.de> writes:
>>
>> > I found this line in pmap output:
>> >
>> > 0000557322314000 6257824K rw--- [ anon ]
>>
>> I guess that's the heap again.
>
> Yes, the heap. So it looks more and more like this is the result of
> glibc not releasing memory to the system, which with some usage
> patterns causes the memory footprint to grow to a ludicrous size.
The heap can only shrink if you free memory at the end of it, so there
is nothing wrong here.
You can call malloc_info (0, stdout) to see the state of the heap.
Andreas.
--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 8:22 ` Andreas Schwab
@ 2020-11-10 12:59 ` Michael Heerdegen
2020-11-10 13:01 ` Andreas Schwab
2020-11-10 15:53 ` Eli Zaretskii
1 sibling, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 12:59 UTC (permalink / raw)
To: Andreas Schwab; +Cc: 43389, RLAdams
Andreas Schwab <schwab@linux-m68k.org> writes:
> You can call malloc_info (0, stdout) to see the state of the heap.
Was that meant for me? If yes: where do I call this? gdb?
Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 12:59 ` Michael Heerdegen
@ 2020-11-10 13:01 ` Andreas Schwab
2020-11-10 13:10 ` Michael Heerdegen
0 siblings, 1 reply; 110+ messages in thread
From: Andreas Schwab @ 2020-11-10 13:01 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams
On Nov 10 2020, Michael Heerdegen wrote:
> Andreas Schwab <schwab@linux-m68k.org> writes:
>
>> You can call malloc_info (0, stdout) to see the state of the heap.
>
> Was that meant for me? If yes: where do I call this? gdb?
Yes, as long as you are not stopped inside malloc.
Andreas.
--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 13:01 ` Andreas Schwab
@ 2020-11-10 13:10 ` Michael Heerdegen
2020-11-10 13:20 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 13:10 UTC (permalink / raw)
To: Andreas Schwab; +Cc: 43389, RLAdams
Andreas Schwab <schwab@linux-m68k.org> writes:
> Yes, as long as you are not stopped inside malloc.
My gdb session looks like this:
[...]
Attaching to process 416219
[New LWP 416220]
[New LWP 416221]
[New LWP 416223]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f3eae76e926 in pselect () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) malloc_info (0, stdout)
Undefined command: "malloc_info". Try "help".
I guess I have an optimized build. Anything I can do better than above?
Thx, Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 13:10 ` Michael Heerdegen
@ 2020-11-10 13:20 ` Eli Zaretskii
2020-11-10 13:26 ` Michael Heerdegen
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 13:20 UTC (permalink / raw)
To: Michael Heerdegen, Andreas Schwab; +Cc: 43389, RLAdams
On November 10, 2020 3:10:20 PM GMT+02:00, Michael Heerdegen <michael_heerdegen@web.de> wrote:
> Andreas Schwab <schwab@linux-m68k.org> writes:
>
> > Yes, as long as you are not stopped inside malloc.
>
> My gdb session looks like this:
>
> [...]
> Attaching to process 416219
> [New LWP 416220]
> [New LWP 416221]
> [New LWP 416223]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library
> "/lib/x86_64-linux-gnu/libthread_db.so.1".
> 0x00007f3eae76e926 in pselect () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) malloc_info (0, stdout)
> Undefined command: "malloc_info". Try "help".
>
> I guess I have an optimized build. Anything I can do better than
> above?
Try this instead:
(gdb) call malloc_info(0, stdout)
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 13:20 ` Eli Zaretskii
@ 2020-11-10 13:26 ` Michael Heerdegen
2020-11-10 14:25 ` Michael Heerdegen
2020-11-10 15:34 ` Eli Zaretskii
0 siblings, 2 replies; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 13:26 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams, Andreas Schwab
Eli Zaretskii <eliz@gnu.org> writes:
> Try this instead:
>
> (gdb) call malloc_info(0, stdout)
Hmm:
(gdb) call malloc_info(0, stdout)
'malloc_info' has unknown return type; cast the call to its declared return type
Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 13:26 ` Michael Heerdegen
@ 2020-11-10 14:25 ` Michael Heerdegen
2020-11-10 15:36 ` Eli Zaretskii
2020-11-10 15:34 ` Eli Zaretskii
1 sibling, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 14:25 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams, Andreas Schwab
Michael Heerdegen <michael_heerdegen@web.de> writes:
> Hmm:
>
> (gdb) call malloc_info(0, stdout)
> 'malloc_info' has unknown return type; cast the call to its declared
> return type
BTW, because I'm such a C noob, I can also offer that you give me a (phone
or Signal) call if you are interested; maybe that's more efficient.
Maybe Andreas could do that if he speaks German (?). (I speak English to
some degree: you can understand me and I will understand most of what
you say, but it's not good enough to keep RMS from making jokes about my
language from time to time.)
I'm also watching my mailbox all the time of course.
Michael.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 14:25 ` Michael Heerdegen
@ 2020-11-10 15:36 ` Eli Zaretskii
2020-11-10 17:44 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 15:36 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams, schwab
> From: Michael Heerdegen <michael_heerdegen@web.de>
> Cc: Andreas Schwab <schwab@linux-m68k.org>, 43389@debbugs.gnu.org,
> RLAdams@AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 15:25:44 +0100
>
> > (gdb) call malloc_info(0, stdout)
> > 'malloc_info' has unknown return type; cast the call to its declared
> > return type
>
> BTW, because I'm such a C noob, I can also offer that you give me a (phone
> or Signal) call if you are interested; maybe that's more efficient.
If the information proves to be useful, maybe we should provide a Lisp
command to call that function. It could be instrumental in asking
people who see this problem to report their results.
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 15:36 ` Eli Zaretskii
@ 2020-11-10 17:44 ` Eli Zaretskii
2020-11-10 18:55 ` Michael Heerdegen
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 17:44 UTC (permalink / raw)
To: michael_heerdegen; +Cc: 43389, RLAdams, schwab
> Date: Tue, 10 Nov 2020 17:36:35 +0200
> From: Eli Zaretskii <eliz@gnu.org>
> Cc: 43389@debbugs.gnu.org, RLAdams@AdamsInfoServ.Com, schwab@linux-m68k.org
>
> > > (gdb) call malloc_info(0, stdout)
> > > 'malloc_info' has unknown return type; cast the call to its declared
> > > return type
> >
> > BTW, because I'm such a C noob, I can also offer that you give me a (phone
> > or Signal) call if you are interested; maybe that's more efficient.
>
> If the information proves to be useful, maybe we should provide a Lisp
> command to call that function. It could be instrumental in asking
> people who see this problem report their results.
I've now added such a command to the master branch. Redirect stderr
to a file, and then invoke "M-x malloc-info RET" when you want a
memory report. The command doesn't display anything, it just writes
the info to the redirected file.
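For example, with an Emacs built from master after that change and
started with stderr redirected (say, to ~/emacs-malloc.log), an untested
sketch like the following would dump the statistics once per hour:
  ;; Assumes a build that has the new `malloc-info' command and that
  ;; stderr was redirected to a file at startup.
  (when (fboundp 'malloc-info)
    (run-with-timer 0 3600 #'malloc-info))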
HTH
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 17:44 ` Eli Zaretskii
@ 2020-11-10 18:55 ` Michael Heerdegen
0 siblings, 0 replies; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 18:55 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams, schwab
Eli Zaretskii <eliz@gnu.org> writes:
> I've now added such a command to the master branch. Redirect stderr
> to a file, and then invoke "M-x malloc-info RET" when you want a
> memory report. The command doesn't display anything, it just writes
> the info to the redirected file.
Great, thanks, I'll use it next time when the issue happens.
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 13:26 ` Michael Heerdegen
2020-11-10 14:25 ` Michael Heerdegen
@ 2020-11-10 15:34 ` Eli Zaretskii
2020-11-10 16:49 ` Michael Heerdegen
1 sibling, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 15:34 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams, schwab
> From: Michael Heerdegen <michael_heerdegen@web.de>
> Cc: Andreas Schwab <schwab@linux-m68k.org>, 43389@debbugs.gnu.org,
> RLAdams@AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 14:26:10 +0100
>
> (gdb) call malloc_info(0, stdout)
> 'malloc_info' has unknown return type; cast the call to its declared return type
Compliance!
(gdb) call (int)malloc_info (0, stdout)
(I would actually try stderr instead of stdout, but I yield to
Andreas's expertise here.)
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 15:34 ` Eli Zaretskii
@ 2020-11-10 16:49 ` Michael Heerdegen
2020-11-10 17:13 ` Eli Zaretskii
2020-12-08 1:07 ` Michael Heerdegen
0 siblings, 2 replies; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 16:49 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams, schwab
Eli Zaretskii <eliz@gnu.org> writes:
> Compliance!
>
> (gdb) call (int)malloc_info (0, stdout)
I'm very sorry, but it's gone.
I used Magit in that session to show a log buffer. That led to memory
usage growing too much, and a daemon killed the session to avoid swapping.
Maybe the problem is even related to Magit usage. But I had a second X
session running at that moment so there was a lot less memory left on
the system when that happened.
FWIW, the only "exceptional" thing that happened yesterday was that
Gnus stalled once after starting. That could also be totally
unrelated.
I'll try to set up a timer that reports rapidly growing memory usage to
me live, so that I can recognize the problem right when it happens.
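For example, a minimal, untested sketch of such a watchdog, assuming a
Linux /proc filesystem and an arbitrary 2 GB threshold (the my- names
are hypothetical):
  (defvar my-rss-warn-threshold (* 2 1024 1024)
    "RSS warning threshold in kB (2 GB here, chosen arbitrarily).")
  (defun my-check-rss ()
    "Warn when this Emacs process's resident set size exceeds the threshold."
    (let ((rss (with-temp-buffer
                 (insert-file-contents (format "/proc/%d/status" (emacs-pid)))
                 (when (re-search-forward "^VmRSS:[ \t]+\\([0-9]+\\)" nil t)
                   (string-to-number (match-string 1))))))
      (when (and rss (> rss my-rss-warn-threshold))
        (message "Warning: Emacs RSS is now %d MB" (/ rss 1024)))))
  (run-with-timer 60 60 #'my-check-rss) ; check once a minute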
Regards,
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 16:49 ` Michael Heerdegen
@ 2020-11-10 17:13 ` Eli Zaretskii
2020-12-08 1:07 ` Michael Heerdegen
1 sibling, 0 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 17:13 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams, schwab
> From: Michael Heerdegen <michael_heerdegen@web.de>
> Cc: schwab@linux-m68k.org, 43389@debbugs.gnu.org, RLAdams@AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 17:49:16 +0100
>
> I'll try to set up a timer that reports rapidly growing memory usage to
> me live, so that I can recognize the problem right when it happens.
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 16:49 ` Michael Heerdegen
2020-11-10 17:13 ` Eli Zaretskii
@ 2020-12-08 1:07 ` Michael Heerdegen
2020-12-08 3:24 ` Jose A. Ortega Ruiz
2020-12-08 5:13 ` Jean Louis
1 sibling, 2 replies; 110+ messages in thread
From: Michael Heerdegen @ 2020-12-08 1:07 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams, schwab
[-- Attachment #1: Type: text/plain, Size: 641 bytes --]
Michael Heerdegen <michael_heerdegen@web.de> writes:
> > Compliance!
> >
> > (gdb) call (int)malloc_info (0, stdout)
>
> I'm very sorry, but it's gone.
Today, "it" happened again (not sure how many problems were are
discussing here, though).
I had been cleaning my web.de INBOX with Gnus. Started Gnus, deleted or
moved some messages, shut down, and repeated. Then I suddenly saw that
our problem was back, Emacs using 6GB or so. The session is gone now (I
shut it down normally). I'm sure that at least a significant part of
the problem materialized while using (more or less only) Gnus.
And here is that heap output you wanted:
[-- Attachment #2: heap.txt --]
[-- Type: text/plain, Size: 3359 bytes --]
<malloc version="1">
<heap nr="0">
<sizes>
<size from="657" to="657" total="2628" count="4"/>
<size from="673" to="673" total="2019" count="3"/>
<size from="689" to="689" total="689" count="1"/>
<size from="705" to="705" total="705" count="1"/>
<size from="721" to="721" total="721" count="1"/>
<size from="737" to="737" total="1474" count="2"/>
<size from="753" to="753" total="2259" count="3"/>
<size from="785" to="785" total="1570" count="2"/>
<size from="801" to="801" total="801" count="1"/>
<size from="817" to="817" total="817" count="1"/>
<size from="833" to="833" total="1666" count="2"/>
<size from="897" to="897" total="1794" count="2"/>
<size from="961" to="961" total="961" count="1"/>
<size from="977" to="977" total="1954" count="2"/>
<size from="993" to="993" total="993" count="1"/>
<size from="1182753" to="1182753" total="1182753" count="1"/>
<unsorted from="527265" to="527265" total="527265" count="1"/>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="30" size="1832141"/>
<system type="current" size="7946854400"/>
<system type="max" size="7946854400"/>
<aspace type="total" size="7946854400"/>
<aspace type="mprotect" size="7946854400"/>
</heap>
<heap nr="1">
<sizes>
<size from="17" to="32" total="32" count="1"/>
<size from="33" to="48" total="96" count="2"/>
<size from="65" to="80" total="80" count="1"/>
<unsorted from="481" to="657" total="1138" count="2"/>
</sizes>
<total type="fast" count="4" size="208"/>
<total type="rest" count="3" size="132722"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<heap nr="2">
<sizes>
<size from="17" to="32" total="704" count="22"/>
<size from="33" to="48" total="192" count="4"/>
<size from="97" to="112" total="112" count="1"/>
</sizes>
<total type="fast" count="27" size="1008"/>
<total type="rest" count="1" size="101424"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<heap nr="3">
<sizes>
<size from="17" to="32" total="608" count="19"/>
<size from="33" to="48" total="96" count="2"/>
<size from="97" to="112" total="112" count="1"/>
<unsorted from="513" to="513" total="513" count="1"/>
</sizes>
<total type="fast" count="22" size="816"/>
<total type="rest" count="2" size="48289"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<heap nr="4">
<sizes>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="1" size="132240"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="53" size="2032"/>
<total type="rest" count="37" size="2246816"/>
<total type="mmap" count="11" size="305704960"/>
<system type="current" size="7947395072"/>
<system type="max" size="7947395072"/>
<aspace type="total" size="7947395072"/>
<aspace type="mprotect" size="7947395072"/>
</malloc>
[-- Attachment #3: Type: text/plain, Size: 17 bytes --]
HTH,
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-08 1:07 ` Michael Heerdegen
@ 2020-12-08 3:24 ` Jose A. Ortega Ruiz
2020-12-08 12:37 ` Russell Adams
2020-12-08 5:13 ` Jean Louis
1 sibling, 1 reply; 110+ messages in thread
From: Jose A. Ortega Ruiz @ 2020-12-08 3:24 UTC (permalink / raw)
To: 43389
On Tue, Dec 08 2020, Michael Heerdegen wrote:
> shut it down normally). I'm sure that at least a significant part of
> the problem materialized while using (more or less only) Gnus.
I also have anecdotal evidence of that. Quite systematically, i start
emacs, things load, i'm around 300MB of RAM, quite stable. Then i start
Gnus, read some groups, and, very soon after that, while emacs is
basically idle, i can see RAM increasing by ~10MB every ~10secs until it
reaches something like 800-900MB.
I've checked and i think the only timer with a periodicity of 10secs
always present when that happens is undo-auto--boundary-timer.
(Sometimes there's also slack-ws-ping, which checks that a websocket
connection is open, but i think i've seen this behaviour without that
timer on).
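A rough, untested way to double-check that from a running session (it
relies on timer.el's internal timer--repeat-delay accessor, so treat it
as a sketch only):
  ;; Collect timers from the built-in `timer-list' that repeat roughly
  ;; every 10 seconds.
  (let (hits)
    (dolist (tm timer-list hits)
      (let ((rep (timer--repeat-delay tm)))
        (when (and (numberp rep) (<= 9 rep 11))
          (push tm hits)))))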
I'm sorry i don't have the time to obtain better benchmark data. Just
mentioning the above in case it rings a bell to someone knowledgeable.
Cheers,
jao
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-08 3:24 ` Jose A. Ortega Ruiz
@ 2020-12-08 12:37 ` Russell Adams
0 siblings, 0 replies; 110+ messages in thread
From: Russell Adams @ 2020-12-08 12:37 UTC (permalink / raw)
To: 43389
On Tue, Dec 08, 2020 at 03:24:27AM +0000, Jose A. Ortega Ruiz wrote:
> On Tue, Dec 08 2020, Michael Heerdegen wrote:
>
> > shut it down normally). I'm sure that at least a significant part of
> > the problem materialized while using (more or less only) Gnus.
>
> I also have anecdotal evidence of that. Quite systematically, i start
> emacs, things load, i'm around 300MB of RAM, quite stable. Then i start
> Gnus, read some groups, and, very soon after that, while emacs is
> basically idle, i can see RAM increasing by ~10MB every ~10secs until it
> reaches something like 800-900MB.
I have consistently encountered this memory leak without a clear path
to reproducing it other than regular use over time, and I don't use
Gnus. I read mail in Mutt in another terminal window.
Thus I'm not sure Gnus is the culprit.
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-08 1:07 ` Michael Heerdegen
2020-12-08 3:24 ` Jose A. Ortega Ruiz
@ 2020-12-08 5:13 ` Jean Louis
2020-12-08 16:29 ` Michael Heerdegen
1 sibling, 1 reply; 110+ messages in thread
From: Jean Louis @ 2020-12-08 5:13 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, schwab, RLAdams
* Michael Heerdegen <michael_heerdegen@web.de> [2020-12-08 04:08]:
> Michael Heerdegen <michael_heerdegen@web.de> writes:
>
> > > Compliance!
> > >
> > > (gdb) call (int)malloc_info (0, stdout)
> >
> > I'm very sorry, but it's gone.
>
> Today, "it" happened again (not sure how many problems were are
> discussing here, though).
>
> I had been cleaning my web.de INBOX with Gnus. Started Gnus, deleted or
> moved some messages, shut down, and repeated. Then I suddenly saw that
> our problem was back, Emacs using 6GB or so. The session is gone now (I
> shut it down normally). I'm sure that at least a significant part of
> the problem materialized while using (more or less only) Gnus.
>
> And here is that heap output you wanted:
Michael, since I stopped using helm-mode always on (I still use Helm,
just not always on) and stopped querying system packages with Helm, I
have not had the problem of swapping hard with 5 GB and more.
I could observe that vsize was increasing, as Eli asked me to check. And
I could observe a slowdown, in the sense that typing became harder. But
the hard disk was not thrashing. I could run garbage-collect without
waiting 40-50 minutes for the function to finish. And I have not updated
or changed my Emacs version yet. I have all the mtraces from when it
happened, and also from after I stopped using Helm, and am waiting for
the developers to say whether they need those mtraces.
Now the question is: do you use Helm with helm-mode always on?
Of course it need not be related. But it is interesting that since I
stopped using it, I have at least not had the swapping problem where
Emacs tries to get some memory or has trouble doing so.
I am especially thinking of the Helm function helm-system-packages,
which always takes a long time because it searches through many
packages. It need not be related, but I do remember that I had memory
problems hours after using that function or turning helm-mode always
on. Since I stopped using it, I have not yet observed the same
problem. Usually it would show up after about one day.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-08 5:13 ` Jean Louis
@ 2020-12-08 16:29 ` Michael Heerdegen
2020-12-10 0:50 ` Michael Heerdegen
0 siblings, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-12-08 16:29 UTC (permalink / raw)
To: Jean Louis; +Cc: 43389, schwab, RLAdams
Jean Louis <bugs@gnu.support> writes:
> Michael, since I stopped using helm-mode always on (I still use Helm,
> just not always on) and stopped querying system packages with Helm, I
> have not had the problem of swapping hard with 5 GB and more.
Yesterday it was not swapping yet. I'm monitoring memory usage with
gkrellm. When it starts blinking red, which was the case yesterday,
memory starts running out. It skipped the blinking yellow state, which
means that a lot of memory must have been acquired in a short time
period.
> Now the question is: do you use Helm with helm-mode always on?
I regularly use some Helm commands (e.g. for C-x C-f or M-x) but not
helm-mode.
> I could observe that vsize was increasing, as Eli asked me to check. And
> I could observe a slowdown, in the sense that typing became harder. But
> the hard disk was not thrashing. I could run garbage-collect without
> waiting 40-50 minutes for the function to finish.
I think we see different symptoms. I don't see any slow-down at all
(unless swapping starts, obviously). When I do M-x garbage-collect, it
finishes immediately without freeing any significant amount of memory.
> Of course it need not be related. But it is interesting that since I
> stopped using it, I have at least not had the swapping problem where
> Emacs tries to get some memory or has trouble doing so.
>
> I am especially thinking of the Helm function helm-system-packages,
> which always takes a long time because it searches through many
> packages.
I was not using this command.
Maybe our problems have a similar cause, but it seems they are a bit
different.
Regards,
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-08 16:29 ` Michael Heerdegen
@ 2020-12-10 0:50 ` Michael Heerdegen
2020-12-10 5:43 ` Jean Louis
0 siblings, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-12-10 0:50 UTC (permalink / raw)
To: Jean Louis; +Cc: 43389, RLAdams, schwab
Michael Heerdegen <michael_heerdegen@web.de> writes:
> I think we see different symptoms. I don't see any slow-down at all
> (unless swapping starts, obviously). When I do M-x garbage-collect, it
> finishes immediately without freeing any significant amount of memory.
I must correct myself. While this all was definitely the case the last
time I tried to investigate this issue (one or two months ago) the
garbage-collect statement is not true anymore. I did M-x
garbage-collect today when the memory was getting short and then Emacs
froze (in the sense of "didn't respond, even to C-g"), without gkrellm
reporting much progress, so I killed it (after 20 seconds or so - aeons
for a computer).
I did not experience a slowdown, however (maybe I've faster RAM?).
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-10 0:50 ` Michael Heerdegen
@ 2020-12-10 5:43 ` Jean Louis
0 siblings, 0 replies; 110+ messages in thread
From: Jean Louis @ 2020-12-10 5:43 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams, schwab
* Michael Heerdegen <michael_heerdegen@web.de> [2020-12-10 03:51]:
> Michael Heerdegen <michael_heerdegen@web.de> writes:
>
> > I think we see different symptoms. I don't see any slow-down at all
> > (unless swapping starts, obviously). When I do M-x garbage-collect, it
> > finishes immediately without freeing any significant amount of memory.
>
> I must correct myself. While this all was definitely the case the last
> time I tried to investigate this issue (one or two months ago) the
> garbage-collect statement is not true anymore. I did M-x
> garbage-collect today when the memory was getting short and then Emacs
> froze (in the sense of "didn't respond, even to C-g"), without gkrellm
> reporting much progress, so I killed it (after 20 seconds or so - aeons
> for a computer).
>
> I did not experience a slowdown, however (maybe I've faster RAM?).
One time I waited for 36 minutes and it completed the garbage collection.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 8:22 ` Andreas Schwab
2020-11-10 12:59 ` Michael Heerdegen
@ 2020-11-10 15:53 ` Eli Zaretskii
1 sibling, 0 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 15:53 UTC (permalink / raw)
To: Andreas Schwab; +Cc: michael_heerdegen, 43389, RLAdams
> From: Andreas Schwab <schwab@linux-m68k.org>
> Cc: Michael Heerdegen <michael_heerdegen@web.de>, 43389@debbugs.gnu.org,
> RLAdams@AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 09:22:20 +0100
>
> > Yes, the heap. So it more and more looks like this is the result of
> > glibc not releasing memory to the system, which with some usage
> > patterns causes the memory footprint grow to ludicrous size.
>
> The heap can only shrink if you free memory at the end of it, so there
> is nothing wrong here.
Yes. Except that some people say once this problem starts, the memory
footprint starts growing very fast, and the question is why.
Also, perhaps Emacs could do something to prevent large amounts of
free memory from being trapped by a small allocation, by modifying
something in how we allocate memory.
(It is a pity that a problem which was solved decades ago by using
ralloc.c is back, and on GNU/Linux of all the platforms, where such
aspects of memory fragmentation aren't supposed to happen, and all the
malloc knobs we could perhaps use to avoid that were deprecated and/or
removed.)
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 3:36 ` Eli Zaretskii
2020-11-10 8:22 ` Andreas Schwab
@ 2020-11-10 10:25 ` Michael Heerdegen
2020-11-10 15:55 ` Eli Zaretskii
1 sibling, 1 reply; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 10:25 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams
Eli Zaretskii <eliz@gnu.org> writes:
> Yes, the heap. So it more and more looks like this is the result of
> glibc not releasing memory to the system, which with some usage
> patterns causes the memory footprint grow to ludicrous size.
FWIW, I'm still in that session, it's still running, and since
yesterday, that session's memory use has shrunk a lot. Nearly half of
the memory that had been in use yesterday apparently has been freed now.
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 10:25 ` Michael Heerdegen
@ 2020-11-10 15:55 ` Eli Zaretskii
2020-11-10 16:41 ` Michael Heerdegen
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 15:55 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams
> From: Michael Heerdegen <michael_heerdegen@web.de>
> Cc: 43389@debbugs.gnu.org, RLAdams@AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 11:25:15 +0100
>
> FWIW, I'm still in that session, it's still running, and since
> yesterday, that session's memory use has shrunk a lot. Nearly half of
> the memory that had been in use yesterday apparently has been freed now.
So the "leak" is not permanent, as some other people here reported?
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 15:55 ` Eli Zaretskii
@ 2020-11-10 16:41 ` Michael Heerdegen
0 siblings, 0 replies; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 16:41 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams
Eli Zaretskii <eliz@gnu.org> writes:
> > FWIW, I'm still in that session, it's still running, and since
> > yesterday, that session's memory use has shrunk a lot. Nearly half of
> > the memory that had been in use yesterday apparently has been freed now.
>
> So the "leak" is not permanent, as some other people here reported?
Maybe not, or not completely. Memory usage was still gigantic, though.
Most of the time people will only recognize the problem when it causes
trouble, and then they probably just restart Emacs. Maybe most of
them did not try to continue using such a session? Only guessing. But
yes, mine did free, say, 2 GB of the 7 GB used, without any intervention
from my side.
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-09 20:46 ` Michael Heerdegen
2020-11-09 21:24 ` Michael Heerdegen
@ 2020-11-09 22:33 ` Jean Louis
2020-11-10 15:47 ` Eli Zaretskii
2020-11-10 3:30 ` Eli Zaretskii
2 siblings, 1 reply; 110+ messages in thread
From: Jean Louis @ 2020-11-09 22:33 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, Russell Adams
* Michael Heerdegen <michael_heerdegen@web.de> [2020-11-09 23:47]:
> Eli Zaretskii <eliz@gnu.org> writes:
>
> > So it seems to be our heap that takes most of the 5GB.
>
> Today it happened again to me. I'm writing from an Emacs session using
> more than 5 GB of memory. I've started it some hours ago and have no
> clue why today had been special. I didn't do anything exceptional.
I can confirm having a similar issue.
It was happening regularly under EXWM. Memory gets occupied more and
more until it cannot grow any further, swapping becomes tedious,
and the computer becomes non-responsive. Then I had to kill it. Using
symon-mode I could see swapping of 8 GB and more. My memory is 4 GB,
plus 8 GB of swap currently.
This condition only occurs after keeping Emacs in memory for a long
time, maybe 5-8 hours.
After putting the laptop to sleep it happens more often.
When I changed to IceWM this happened only once.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-09 22:33 ` Jean Louis
@ 2020-11-10 15:47 ` Eli Zaretskii
2020-11-10 16:36 ` Michael Heerdegen
2020-11-10 19:51 ` Jean Louis
0 siblings, 2 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 15:47 UTC (permalink / raw)
To: Jean Louis; +Cc: michael_heerdegen, 43389, RLAdams
> Date: Tue, 10 Nov 2020 01:33:17 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: Eli Zaretskii <eliz@gnu.org>, 43389@debbugs.gnu.org,
> Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> It was happening regularly under EXWM. Memory gets occupied more and
> more until it cannot grow any further, swapping becomes tedious,
> and the computer becomes non-responsive. Then I had to kill it. Using
> symon-mode I could see swapping of 8 GB and more. My memory is 4 GB,
> plus 8 GB of swap currently.
>
> This condition only occurs after keeping Emacs in memory for a long
> time, maybe 5-8 hours.
>
> After putting the laptop to sleep it happens more often.
>
> When I changed to IceWM this happened only once.
If this was due to a WM, are you sure it was Emacs that was eating up
memory, and not the WM itself? If it was Emacs, then I think the only
way it could depend on the WM is if the WM feeds Emacs with many X
events that somehow consume memory.
Michael, what WM are you using?
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 15:47 ` Eli Zaretskii
@ 2020-11-10 16:36 ` Michael Heerdegen
2020-11-10 19:51 ` Jean Louis
1 sibling, 0 replies; 110+ messages in thread
From: Michael Heerdegen @ 2020-11-10 16:36 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, RLAdams, Jean Louis
Eli Zaretskii <eliz@gnu.org> writes:
> If this was due to a WM, are you sure it was Emacs that was eating up
> memory, and not the WM itself? If it was Emacs, then I think the only
> way it could depend on the WM is if the WM feeds Emacs with many X
> events that somehow consume memory.
I'm using openbox here, which is comparably lightweight to icewm. I don't
see any indication that the window manager is to blame.
Michael.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-10 15:47 ` Eli Zaretskii
2020-11-10 16:36 ` Michael Heerdegen
@ 2020-11-10 19:51 ` Jean Louis
1 sibling, 0 replies; 110+ messages in thread
From: Jean Louis @ 2020-11-10 19:51 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: michael_heerdegen, 43389, RLAdams
* Eli Zaretskii <eliz@gnu.org> [2020-11-10 18:47]:
> > Date: Tue, 10 Nov 2020 01:33:17 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: Eli Zaretskii <eliz@gnu.org>, 43389@debbugs.gnu.org,
> > Russell Adams <RLAdams@AdamsInfoServ.Com>
> >
> > It was happening regularly under EXWM. Memory gets occupied more and
> > more until it cannot grow any further, swapping becomes tedious,
> > and the computer becomes non-responsive. Then I had to kill it. Using
> > symon-mode I could see swapping of 8 GB and more. My memory is 4 GB,
> > plus 8 GB of swap currently.
> >
> > This condition only occurs after keeping Emacs in memory for a long
> > time, maybe 5-8 hours.
> >
> > After putting the laptop to sleep it happens more often.
> >
> > When I changed to IceWM this happened only once.
>
> If this was due to a WM, are you sure it was Emacs that was eating up
> memory, and not the WM itself?
More often than not I could not do anything, so I just hard-reset the
computer without a shutdown. For some reason not even the Magic SysRq
key was enabled on Hyperbola GNU/Linux-libre, so I have enabled it to at
least be able to sync the disks and unmount them before resetting.
How do I know it was Emacs? I do not know for sure, I am just assuming. I
was using almost exclusively Emacs, sometimes the sxiv image viewer
(which exits after viewing), and a browser. Then I switched to the
console and tried killing the browser to see if the system became
responsive. Killing any other program did not make the system
responsive; only killing Emacs gave me back responsiveness. That is,
provided I could switch to the console at all, as responsiveness was
terrible: out of maybe 20 attempts I managed to switch only a few times.
This happened more than 20 times, and I was using symon-mode to monitor
swapping. When I saw that swap usage was a few gigabytes for no good
reason, I tried killing everything to understand what was going
on. I ended up killing Emacs and EXWM and restarting X to get back into
good shape.
Because it was tedious, over weeks, not to be able to rely on the
computer under EXWM, I switched to IceWM, which is familiar to me. And I
have not encountered anything like that regardless of how long Emacs runs.
Now, after the discussion of the other bug where you suggested limiting
RSS: after limiting RSS I could invoke ./a.out and still get a prompt,
so maybe ulimit -m or some other tweaking could stop that type of
behavior. I have to look into it.
It could again be that Emacs is not responsible for this, but rather the
liberal system settings.
> If it was Emacs, then I think the only way it could depend on the WM
> is if the WM feeds Emacs with many X events that somehow consume
> memory.
I was thinking of reporting this to EXWM, but I am unsure why it is
happening and cannot easily find out what is really swapping. But
because I often used Emacs exclusively, I conclude that it has to be
Emacs that is swapping.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-09 20:46 ` Michael Heerdegen
2020-11-09 21:24 ` Michael Heerdegen
2020-11-09 22:33 ` Jean Louis
@ 2020-11-10 3:30 ` Eli Zaretskii
2 siblings, 0 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-10 3:30 UTC (permalink / raw)
To: Michael Heerdegen; +Cc: 43389, RLAdams
> From: Michael Heerdegen <michael_heerdegen@web.de>
> Cc: Russell Adams <RLAdams@AdamsInfoServ.Com>, 43389@debbugs.gnu.org
> Date: Mon, 09 Nov 2020 21:46:11 +0100
>
> Garbage collection stats:
> ((conses 16 2645730 3784206) (symbols 48 68678 724) (strings 32 528858 451889) (string-bytes 1 18127696) (vectors 16 213184) (vector-slots 8 3704641 2189052) (floats 8 2842 5514) (intervals 56 264780 87057) (buffers 992 119))
>
> => 40.4MB (+ 57.7MB dead) in conses
> 3.14MB (+ 33.9kB dead) in symbols
> 16.1MB (+ 13.8MB dead) in strings
> 17.3MB in string-bytes
> 3.25MB in vectors
> 28.3MB (+ 16.7MB dead) in vector-slots
> 22.2kB (+ 43.1kB dead) in floats
> 14.1MB (+ 4.65MB dead) in intervals
> 115kB in buffers
>
> Total in lisp objects: 216MB (live 123MB, dead 93.0MB)
>
> Buffer ralloc memory usage:
> 119 buffers
> 16.1MB total (1.71MB in gaps)
Once again, the memory managed by GC doesn't explain the overall
footprint.
> Anything I can do to find out more?
If you have some tool that can produce a detailed memory map, stating
which part and which library uses what memory, please do. Otherwise,
the most important thing is to try to describe what you did from the
beginning of the session, including the files you visited and other
features/commands you invoked that could at some point consume memory.
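(On GNU/Linux, for instance, an untested sketch along these lines would
capture such a map from inside the affected session; /proc/PID/smaps
gives even more detail:)
  ;; Snapshot this Emacs process's memory map; each mapping is listed
  ;; together with the library or file backing it.
  (with-current-buffer (get-buffer-create "*emacs-memory-map*")
    (erase-buffer)
    (insert-file-contents (format "/proc/%d/maps" (emacs-pid)))
    (display-buffer (current-buffer)))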
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-17 20:47 ` Russell Adams
2020-09-17 21:58 ` Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors
2020-09-18 8:22 ` Eli Zaretskii
@ 2020-11-26 15:42 ` Russell Adams
2020-11-26 16:34 ` Eli Zaretskii
2 siblings, 1 reply; 110+ messages in thread
From: Russell Adams @ 2020-11-26 15:42 UTC (permalink / raw)
To: 43389
On Thu, Sep 17, 2020 at 10:47:04PM +0200, Russell Adams wrote:
> From Emacs memory-usage package:
>
> Garbage collection stats:
> ((conses 16 1912248 251798) (symbols 48 54872 19) (strings 32 327552 81803) (string-bytes 1 12344346) (vectors 16 158994) (vector-slots 8 2973919 339416) (floats 8 992 4604) (intervals 56 182607 7492) (buffers 1000 195))
>
> => 29.2MB (+ 3.84MB dead) in conses
> 2.51MB (+ 0.89kB dead) in symbols
> 10.00MB (+ 2.50MB dead) in strings
> 11.8MB in string-bytes
> 2.43MB in vectors
> 22.7MB (+ 2.59MB dead) in vector-slots
> 7.75kB (+ 36.0kB dead) in floats
> 9.75MB (+ 410kB dead) in intervals
> 190kB in buffers
>
> Total in lisp objects: 97.9MB (live 88.5MB, dead 9.36MB)
I had the memory leak occur again and this time I had the
glibc-malloc-trace-utils loaded and running from the start.
So my emacs grew to 8GB in RAM, and what was curious is that if it was a
background task (no emacsclient window focused), then the
memory stayed the same. When I had the window focused, I could watch
the memory constantly increasing in htop, a few megs at a time.
Garbage collection stats:
((conses 16 1749077 1176908)
(symbols 48 47530 38)
(strings 32 307123 144020)
(string-bytes 1 10062511)
(vectors 16 113172)
(vector-slots 8 2105205 486800)
(floats 8 709 1719)
(intervals 56 174593 44804)
(buffers 1000 71))
=> 26.7MB (+ 18.0MB dead) in conses
2.18MB (+ 1.78kB dead) in symbols
9.37MB (+ 4.40MB dead) in strings
9.60MB in string-bytes
1.73MB in vectors
16.1MB (+ 3.71MB dead) in vector-slots
5.54kB (+ 13.4kB dead) in floats
9.32MB (+ 2.39MB dead) in intervals
69.3kB in buffers
Total in lisp objects: 103MB (live 75.0MB, dead 28.5MB)
Buffer ralloc memory usage:
47 buffers
3.36MB total ( 232kB in gaps)
Size Gap Name
926626 1504 AIS.org
690050 1933 Personal.org
553850 2000 Abuffer.org
490398 3851 *Packages*
215653 2000 KB.org
76686 1708 X230.org
59841 2123 Agenda.org
51375 51076 *sly-events for sbcl*
51060 1902 ASC.org
44596 2000 Contacts.org
36825 1792 *Messages*
23882 2309 *org-caldav-debug*
22867 2000 rgb.lisp
14678 746 *sly-mrepl for sbcl*
6640 1173 VirtualFCMap.lisp
4096 2000 *code-converting-work*
3409 16717 *http orgmode.org:443*
1946 104 *Org Agenda*
1528 2028 *http gaming.demosthenes.org*-491231
1524 2028 *http gaming.demosthenes.org*-15349
1518 2028 *http gaming.demosthenes.org*
1276 1368 *sly-inferior-lisp for sbcl*
1231 2026 *http gaming.demosthenes.org*-464306
1208 825 *Help*
679 1574 *Buffer Details*
641 1975 *Agenda Commands*
531 1494 *Calendar*
324 2008 *http melpa.org:443*
278 3775 *helm M-x*
185 1838 *org caldav sync result*
144 2000 *scratch*
57 21434 *helm find files*
44 5610 *icalendar-work*
30 2000 *sly-fontify*
21 2000 *log-edit-files*
20 0 *pdf-info-query--escape*
18 4077 *helm mini*
12 8630 *code-conversion-work*
5 4065 *Echo Area 1*
0 2033 *Minibuf-1*
0 20 *Minibuf-0*
0 20 *server*
0 4060 *Echo Area 0*
0 61547 *sly-1*
0 20 *sly-dds-1-1*
0 20 *changes to ~/ASC/Software/Snaps/*
0 20 *vc*
I started emacs with:
MTRACE_CTL_FILE=mtraceEMACS.mtr LD_PRELOAD=~/software/glibc-malloc-trace-utils/libmtrace.so ~/.local/bin/emacs --daemon >> ~/.config/emacs/emacs.log 2>&1
This created some huge files. By the time I reached 8GB in RAM, the
mtr file for the main process (I think) was 53 GB. I also have little mtrace
files littered everywhere in different project directories.
-rw-r--r-- 1 adamsrl adamsrl 53G Nov 26 13:23 mtraceEMACS.mtr.15236
-rw-r--r-- 1 adamsrl adamsrl 4.2G Nov 26 13:36 my.wl
-rw-r--r-- 1 adamsrl adamsrl 1.3G Nov 26 13:50 mtraceEMACS.mtr.15236.allocs
-rw-r--r-- 1 adamsrl adamsrl 32K Nov 26 13:55 mtraceEMACS.mtr.15236.binnedallocs.log
-rw-r--r-- 1 adamsrl adamsrl 6.0G Nov 26 15:12 vmrssout
-rw-r--r-- 1 adamsrl adamsrl 6.0G Nov 26 15:12 vmout
-rw-r--r-- 1 adamsrl adamsrl 8.6G Nov 26 15:12 idealrssout
I converted the mtraceEMACS.mtr.15236 to my.wl using trace2wl.
The trace_run command produced this output:
% ~/software/glibc-malloc-trace-utils/trace_run ./my.wl vmout vmrssout idealrssout
11,757,635,230,744 cycles
4,532,472,554 usec wall time
5,966,752,470 usec across 3 threads
8,461,721,600 bytes Max RSS (218,308,608 -> 8,680,030,208)
Starting VmRSS 218308608 (bytes)
Starting VmSize 219549696 (bytes)
Starting MaxRSS 218308608 (bytes)
Ending VmRSS 8680030208 (bytes)
Ending VmSize 8903626752 (bytes)
Ending MaxRSS 8680030208 (bytes)
8,131,008 Kb Max Ideal RSS
sizeof ticks_t is 8
Avg malloc time: 145 in 422,186,832 calls
Avg calloc time: 12,538 in 1,164,584 calls
Avg realloc time: 566 in 3,294,165 calls
Avg free time: 110 in 449,397,629 calls
Total call time: 127,318,389,383 cycles
These files are impossible to share around; is there anything I can
run to extract anything else useful from them?
% ~/software/glibc-malloc-trace-utils/trace_statistics mtraceEMACS.mtr.15236
Min allocation size: 0
Max allocation size: 1603869
Mean allocation size: 128
I did follow the instructions for downsampling, but I haven't a clue
what to do in Octave. Is it worth posting those files?
I have the impression this is more about how often more RAM was
requested, and not the source of the call?
I should mention I'm present in #emacs and happy to discuss there.
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-26 15:42 ` Russell Adams
@ 2020-11-26 16:34 ` Eli Zaretskii
2020-11-26 16:54 ` Russell Adams
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-26 16:34 UTC (permalink / raw)
To: Russell Adams; +Cc: 43389
> Date: Thu, 26 Nov 2020 16:42:19 +0100
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> So my emacs grew to 8GB in RAM, and what was curious is that if it was a
> background task (no emacsclient window focused), then the
> memory stayed the same. When I had the window focused, I could watch
> the memory constantly increasing in htop, a few megs at a time.
Was the memory increasing even when you did nothing in the session?
If so, do you have some background functions running, e.g. timers? If
Emacs was not idle, can you describe what you were doing at that time?
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-26 16:34 ` Eli Zaretskii
@ 2020-11-26 16:54 ` Russell Adams
2020-11-26 19:20 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Russell Adams @ 2020-11-26 16:54 UTC (permalink / raw)
To: 43389
On Thu, Nov 26, 2020 at 06:34:31PM +0200, Eli Zaretskii wrote:
> > Date: Thu, 26 Nov 2020 16:42:19 +0100
> > From: Russell Adams <RLAdams@AdamsInfoServ.Com>
> >
> > So my emacs grew to 8GB in RAM, and what was curious is that if it was a
> > background task (no emacsclient window focused), then the
> > memory stayed the same. When I had the window focused, I could watch
> > the memory constantly increasing in htop, a few megs at a time.
>
> Was the memory increasing even when you did nothing in the session?
> If so, do you have some background functions running, e.g. timers? If
> Emacs was not idle, can you describe what you were doing at that time?
At one point I was watching htop and every time I switched to the
Emacs window and returned to htop, I'd see it grow by several more MB
over 3-5 seconds and then stop. So I left Emacs as the focused window
overnight, and it grew from 4GB to 8GB.
In this instance, I had my cursor at the bottom of a saved Org file. I
wasn't even actively typing or interacting with Emacs. It just grew
each time it got window focus.
Yes, I have a few timers, but those trip at midnight. I call org-agenda
and org-caldav-sync. I don't have any other timers that I know of.
Mind you, I'm running in daemon mode and I'm looking at an emacsclient
frame.
Thanks.
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-26 16:54 ` Russell Adams
@ 2020-11-26 19:20 ` Eli Zaretskii
2020-11-27 10:45 ` Russell Adams
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-26 19:20 UTC (permalink / raw)
To: Russell Adams; +Cc: 43389
> Date: Thu, 26 Nov 2020 17:54:36 +0100
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> At one point I was watching htop and every time I switched to the
> Emacs window and returned to htop, I'd see it grow by several more MB
> over 3-5 seconds and then stop. So I left Emacs as the focused window
> overnight, and it grew from 4GB to 8GB.
>
> In this instance, I had my cursor at the bottom of a saved Org file. I
> wasn't even actively typing or interacting with Emacs. It just grew
> each time it got window focus.
OK, so an idling Emacs with one focused frame gains about 0.5GB every
hour, would that be more or less accurate?
> Yes I have a few timers, but those trip at midnight. I call org-agenda
> and org-caldev-sync. I don't have any other timers that I know of.
Just so we have the hard evidence: could you please show the values of
timer-list and timer-idle-list on that system?
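(Either variable can be inspected directly, for instance with an
untested convenience sketch like the one below, or interactively via
M-x list-timers in recent Emacs versions:)
  ;; Pretty-print both built-in timer variables into a temporary buffer.
  (with-output-to-temp-buffer "*timers*"
    (pp (list :timer-list timer-list
              :timer-idle-list timer-idle-list)))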
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-26 19:20 ` Eli Zaretskii
@ 2020-11-27 10:45 ` Russell Adams
2020-11-27 12:38 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Russell Adams @ 2020-11-27 10:45 UTC (permalink / raw)
To: 43389
On Thu, Nov 26, 2020 at 09:20:42PM +0200, Eli Zaretskii wrote:
> > Date: Thu, 26 Nov 2020 17:54:36 +0100
> > From: Russell Adams <RLAdams@AdamsInfoServ.Com>
> >
> > At one point I was watching htop and every time I switched to the
> > Emacs window and returned to htop, I'd see it grow by several more MB
> > over 3-5 seconds and then stop. So I left Emacs as the focused window
> > overnight, and it grew from 4GB to 8GB.
> >
> > In this instance, I had my cursor at the bottom of a saved Org file. I
> > wasn't even actively typing or interacting with Emacs. I just grew
> > each time it got window focus.
>
> OK, so an idling Emacs with one focused frame gains about 0.5GB every
> hour, would that be more or less accurate?
>
> > Yes, I have a few timers, but those trip at midnight. I call org-agenda
> > and org-caldav-sync. I don't have any other timers that I know of.
>
> Just so we have the hard evidence: could you please show the values of
> timer-list and timer-idle-list on that system?
>
> Thanks.
>
3.15 1.00 appt-check
8.38 - undo-auto--boundary-timer
117.38 5.00 savehist-autosave
1143.17 60.00 url-cookie-write-file
44223.15 1440.00 org-save-all-org-buffers
44283.15 1440.00 org-agenda-list
44343.15 1440.00 org-caldav-sync
* 0.00 t show-paren-function
* 0.50 t #f(compiled-function () #<bytecode 0x1ffd99dba7bf> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
* 1.00 - helm-ff--cache-mode-refresh
Unfortunately the Emacs that was 8GB has since been stopped, I killed
it before working with the trace files. My laptop was rebooted later
when the trace statistics utils ate all the RAM (my error, wrong input
file).
This list of timers is from a new instance, but the configuration
hasn't changed.
Are the 50+GB of trace files I have of any value?
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-27 10:45 ` Russell Adams
@ 2020-11-27 12:38 ` Eli Zaretskii
2020-11-28 19:56 ` Russell Adams
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-27 12:38 UTC (permalink / raw)
To: Russell Adams; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos
> Date: Fri, 27 Nov 2020 11:45:20 +0100
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> > > Yes, I have a few timers, but those trip at midnight. I call org-agenda
> > > and org-caldav-sync. I don't have any other timers that I know of.
> >
> > Just so we have the hard evidence: could you please show the values of
> > timer-list and timer-idle-list on that system?
> >
> > Thanks.
> >
>
> 3.15 1.00 appt-check
> 8.38 - undo-auto--boundary-timer
> 117.38 5.00 savehist-autosave
> 1143.17 60.00 url-cookie-write-file
> 44223.15 1440.00 org-save-all-org-buffers
> 44283.15 1440.00 org-agenda-list
> 44343.15 1440.00 org-caldav-sync
> * 0.00 t show-paren-function
> * 0.50 t #f(compiled-function () #<bytecode 0x1ffd99dba7bf> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
> * 1.00 - helm-ff--cache-mode-refresh
Thanks.
> Unfortunately the Emacs that was 8GB has since been stopped, I killed
> it before working with the trace files. My laptop was rebooted later
> when the trace statistics utils ate all the RAM (my error, wrong input
> file).
>
> This list of timers is from a new instance, but the configuration
> hasn't changed.
>
> Are the 50+GB of trace files I have of any value?
I don't think Carlos and others saw your reports, because they were
not CC'ed. I'm CC'ing them now; please make sure to reply to all of
them next time.
Carlos, please read
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#554
for the details posted by Russell about his data points. If you can
instruct him how to produce some analysis from the mtrace files, or
how to make them available for your analysis, please do.
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-27 12:38 ` Eli Zaretskii
@ 2020-11-28 19:56 ` Russell Adams
2020-11-28 20:13 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Russell Adams @ 2020-11-28 19:56 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos
On Fri, Nov 27, 2020 at 02:38:07PM +0200, Eli Zaretskii wrote:
> > Unfortunately the Emacs that was 8GB has since been stopped, I killed
> > it before working with the trace files. My laptop was rebooted later
> > when the trace statistics utils ate all the RAM (my error, wrong input
> > file).
> >
> > This list of timers is from a new instance, but the configuration
> > hasn't changed.
> >
> > Are the 50+GB of trace files I have of any value?
>
> I don't think Carlos and others saw your reports, because they were
> not CC'ed. I'm CC'ing them now; please make sure to reply to all of
> them next time.
>
> Carlos, please read
>
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#554
>
> for the details posted by Russel about his data points. If you can
> instruct him how to produce some analysis from the mtrace files, or
> how to make them available for your analysis, please do.
I find the growth of Emacs processes while idle particularly
interesting.
Yesterday I restarted Emacs and over the next 18 hours I left my
laptop idle with Emacs as the focused application. My Emacs has grown
to 3GB and every time I select my Emacs window it will grow by a few
MB while I watch in htop.
I will restart it again tonight and leave it focused, and see if I can
reproduce the growth. It also appears that the growth is not linear:
slower at first and hard to see, but multiple MB at a time later, once
the total is in the GB range.
Again I use emacs in daemon mode with one or more emacsclient
processes connected (x11 and terminal). I use StumpWM in full screen
mode with my emacsclient, and if it's focused it seems the growth
continues despite xscreensaver coming on and dimming the screen.
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-28 19:56 ` Russell Adams
@ 2020-11-28 20:13 ` Eli Zaretskii
2020-11-28 21:52 ` Basil L. Contovounesios
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-28 20:13 UTC (permalink / raw)
To: Russell Adams; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos
> Date: Sat, 28 Nov 2020 20:56:31 +0100
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
> Cc: dj@redhat.com, fweimer@redhat.com, trevor@trevorbentley.com,
> michael_heerdegen@web.de, carlos@redhat.com, 43389@debbugs.gnu.org
>
> I find particularly of interest the growth of Emacs processes while
> idle.
>
> Yesterday I restarted Emacs and over the next 18 hours I left my
> laptop idle with Emacs as the focused application. My Emacs has grown
> to 3GB and every time I select my Emacs window it will grow by a few
> MB while I watch in htop.
Is there any way to get a trace/record of X events that are delivered
to Emacs during this kind of idleness? Those events and the timers
are, I think, the only things going on inside such an idle
session.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-28 20:13 ` Eli Zaretskii
@ 2020-11-28 21:52 ` Basil L. Contovounesios
2020-11-29 3:29 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Basil L. Contovounesios @ 2020-11-28 21:52 UTC (permalink / raw)
To: Eli Zaretskii
Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos,
Russell Adams
Eli Zaretskii <eliz@gnu.org> writes:
> Is there any way to get a trace/record of X events that are delivered
> to Emacs during this kind of idleness? Those events and the timers
> are, I think, the only things going on inside such an idle
> session.
What about asynchronous processes, such as url.el retrievals?
(Though I guess those would be accounted for in buffer/GC lists.)
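(Those can at least be enumerated from the running session, e.g. with
the sketch below, or interactively via M-x list-processes:)
  ;; List live subprocesses and network connections by name and status.
  (mapcar (lambda (p) (list (process-name p) (process-status p)))
          (process-list))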
--
Basil
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-28 21:52 ` Basil L. Contovounesios
@ 2020-11-29 3:29 ` Eli Zaretskii
0 siblings, 0 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-29 3:29 UTC (permalink / raw)
To: Basil L. Contovounesios
Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos, RLAdams
> From: "Basil L. Contovounesios" <contovob@tcd.ie>
> Cc: Russell Adams <RLAdams@AdamsInfoServ.Com>, fweimer@redhat.com,
> 43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de,
> trevor@trevorbentley.com, carlos@redhat.com
> Date: Sat, 28 Nov 2020 21:52:42 +0000
>
> Eli Zaretskii <eliz@gnu.org> writes:
>
> > Is there any way to get a trace/record of X events that are delivered
> > to Emacs during this kind of idleness? Those events and the timers
> > are, I think, the only things going on inside such an idle
> > session.
>
> What about asynchronous processes, such as url.el retrievals?
Those should not depend on whether the session is GUI or TTY, nor on
whether an Emacs frame has focus.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-14 0:43 bug#43389: 28.0.50; Emacs memory leaks Michael Heerdegen
2020-09-14 19:09 ` Juri Linkov
@ 2020-09-17 20:59 ` Thomas Ingram
2020-10-29 20:17 ` Trevor Bentley
` (2 subsequent siblings)
4 siblings, 0 replies; 110+ messages in thread
From: Thomas Ingram @ 2020-09-17 20:59 UTC (permalink / raw)
To: 43389
Hello.
I experienced something similar today: I noticed Emacs was using 3.6GB
of memory under light Org mode usage (a dozen buffers, all files smaller
than half a MB). I had to close Emacs as my computer was locking up, but
here is my report-emacs-bug output with roughly the same workload open.
I'll try to gather more information next time I notice unusual memory usage.
Thanks.
In GNU Emacs 27.1 (build 1, x86_64-redhat-linux-gnu, GTK+ Version
3.24.21, cairo version 1.16.0)
of 2020-08-20 built on buildvm-x86-24.iad2.fedoraproject.org
Windowing system distributor 'Fedora Project', version 11.0.12008000
System Description: Fedora 32 (Workstation Edition)
Recent messages:
org-babel-exp process emacs-lisp at position 9286...
org-babel-exp process nil at position 9867...
org-babel-exp process make at position 10150...
Setting up indent for shell type bash
Indentation variables are now local.
Indentation setup for shell type bash
Saving file
/home/thomas/Documents/taingram.org/html/blog/org-mode-blog.html...
Wrote /home/thomas/Documents/taingram.org/html/blog/org-mode-blog.html
Mark saved where search started
Making completion list...
Configured using:
'configure --build=x86_64-redhat-linux-gnu
--host=x86_64-redhat-linux-gnu --program-prefix=
--disable-dependency-tracking --prefix=/usr --exec-prefix=/usr
--bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
--datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64
--libexecdir=/usr/libexec --localstatedir=/var
--sharedstatedir=/var/lib --mandir=/usr/share/man
--infodir=/usr/share/info --with-dbus --with-gif --with-jpeg --with-png
--with-rsvg --with-tiff --with-xft --with-xpm --with-x-toolkit=gtk3
--with-gpm=no --with-xwidgets --with-modules --with-harfbuzz
--with-cairo --with-json build_alias=x86_64-redhat-linux-gnu
host_alias=x86_64-redhat-linux-gnu 'CFLAGS=-DMAIL_USE_LOCKF -O2 -g
-pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong
-grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
LDFLAGS=-Wl,-z,relro
PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
Configured features:
XPM JPEG TIFF GIF PNG RSVG CAIRO SOUND DBUS GSETTINGS GLIB NOTIFY
INOTIFY ACL LIBSELINUX GNUTLS LIBXML2 FREETYPE HARFBUZZ M17N_FLT LIBOTF
ZLIB TOOLKIT_SCROLL_BARS GTK3 X11 XDBE XIM MODULES THREADS XWIDGETS
LIBSYSTEMD JSON PDUMPER GMP
Important settings:
value of $LANG: en_US.UTF-8
value of $XMODIFIERS: @im=ibus
locale-coding-system: utf-8-unix
Major mode: Org
Minor modes in effect:
flyspell-mode: t
shell-dirtrack-mode: t
global-company-mode: t
company-mode: t
override-global-mode: t
recentf-mode: t
tooltip-mode: t
global-eldoc-mode: t
electric-indent-mode: t
mouse-wheel-mode: t
menu-bar-mode: t
file-name-shadow-mode: t
global-font-lock-mode: t
font-lock-mode: t
blink-cursor-mode: t
auto-composition-mode: t
auto-encryption-mode: t
auto-compression-mode: t
column-number-mode: t
line-number-mode: t
auto-fill-function: org-auto-fill-function
transient-mark-mode: t
Load-path shadows:
/home/thomas/.config/emacs/elpa/xref-1.0.3/xref hides
/usr/share/emacs/27.1/lisp/progmodes/xref
/home/thomas/.config/emacs/elpa/flymake-1.0.9/flymake hides
/usr/share/emacs/27.1/lisp/progmodes/flymake
/home/thomas/.config/emacs/elpa/project-0.5.2/project hides
/usr/share/emacs/27.1/lisp/progmodes/project
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-screen
hides /usr/share/emacs/27.1/lisp/org/ob-screen
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-table
hides /usr/share/emacs/27.1/lisp/org/org-table
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lisp hides
/usr/share/emacs/27.1/lisp/org/ob-lisp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-core hides
/usr/share/emacs/27.1/lisp/org/ob-core
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-md hides
/usr/share/emacs/27.1/lisp/org/ox-md
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-calc hides
/usr/share/emacs/27.1/lisp/org/ob-calc
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-crypt
hides /usr/share/emacs/27.1/lisp/org/org-crypt
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-plot hides
/usr/share/emacs/27.1/lisp/org/org-plot
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-awk hides
/usr/share/emacs/27.1/lisp/org/ob-awk
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-perl hides
/usr/share/emacs/27.1/lisp/org/ob-perl
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-org hides
/usr/share/emacs/27.1/lisp/org/ox-org
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-odt hides
/usr/share/emacs/27.1/lisp/org/ox-odt
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ebnf hides
/usr/share/emacs/27.1/lisp/org/ob-ebnf
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ditaa hides
/usr/share/emacs/27.1/lisp/org/ob-ditaa
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ocaml hides
/usr/share/emacs/27.1/lisp/org/ob-ocaml
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-install
hides /usr/share/emacs/27.1/lisp/org/org-install
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sql hides
/usr/share/emacs/27.1/lisp/org/ob-sql
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-js hides
/usr/share/emacs/27.1/lisp/org/ob-js
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-org hides
/usr/share/emacs/27.1/lisp/org/ob-org
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-pcomplete
hides /usr/share/emacs/27.1/lisp/org/org-pcomplete
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-exp hides
/usr/share/emacs/27.1/lisp/org/ob-exp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-src hides
/usr/share/emacs/27.1/lisp/org/org-src
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-java hides
/usr/share/emacs/27.1/lisp/org/ob-java
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-stan hides
/usr/share/emacs/27.1/lisp/org/ob-stan
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-mscgen
hides /usr/share/emacs/27.1/lisp/org/ob-mscgen
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-gnus hides
/usr/share/emacs/27.1/lisp/org/ol-gnus
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-shell hides
/usr/share/emacs/27.1/lisp/org/ob-shell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-matlab
hides /usr/share/emacs/27.1/lisp/org/ob-matlab
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lilypond
hides /usr/share/emacs/27.1/lisp/org/ob-lilypond
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-bibtex
hides /usr/share/emacs/27.1/lisp/org/ol-bibtex
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-num hides
/usr/share/emacs/27.1/lisp/org/org-num
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-coq hides
/usr/share/emacs/27.1/lisp/org/ob-coq
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ruby hides
/usr/share/emacs/27.1/lisp/org/ob-ruby
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-compat
hides /usr/share/emacs/27.1/lisp/org/org-compat
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-J hides
/usr/share/emacs/27.1/lisp/org/ob-J
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-ctags
hides /usr/share/emacs/27.1/lisp/org/org-ctags
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-goto hides
/usr/share/emacs/27.1/lisp/org/org-goto
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-archive
hides /usr/share/emacs/27.1/lisp/org/org-archive
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-clojure
hides /usr/share/emacs/27.1/lisp/org/ob-clojure
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-macro
hides /usr/share/emacs/27.1/lisp/org/org-macro
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-picolisp
hides /usr/share/emacs/27.1/lisp/org/ob-picolisp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-haskell
hides /usr/share/emacs/27.1/lisp/org/ob-haskell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-attach-git
hides /usr/share/emacs/27.1/lisp/org/org-attach-git
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-agenda
hides /usr/share/emacs/27.1/lisp/org/org-agenda
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-tempo
hides /usr/share/emacs/27.1/lisp/org/org-tempo
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-inlinetask
hides /usr/share/emacs/27.1/lisp/org/org-inlinetask
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-forth hides
/usr/share/emacs/27.1/lisp/org/ob-forth
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-latex hides
/usr/share/emacs/27.1/lisp/org/ox-latex
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-vala hides
/usr/share/emacs/27.1/lisp/org/ob-vala
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-protocol
hides /usr/share/emacs/27.1/lisp/org/org-protocol
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol hides
/usr/share/emacs/27.1/lisp/org/ol
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-emacs-lisp
hides /usr/share/emacs/27.1/lisp/org/ob-emacs-lisp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-icalendar
hides /usr/share/emacs/27.1/lisp/org/ox-icalendar
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-element
hides /usr/share/emacs/27.1/lisp/org/org-element
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-texinfo
hides /usr/share/emacs/27.1/lisp/org/ox-texinfo
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-tangle
hides /usr/share/emacs/27.1/lisp/org/ob-tangle
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-fortran
hides /usr/share/emacs/27.1/lisp/org/ob-fortran
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ledger
hides /usr/share/emacs/27.1/lisp/org/ob-ledger
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-eww hides
/usr/share/emacs/27.1/lisp/org/ol-eww
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sqlite
hides /usr/share/emacs/27.1/lisp/org/ob-sqlite
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-publish
hides /usr/share/emacs/27.1/lisp/org/ox-publish
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-C hides
/usr/share/emacs/27.1/lisp/org/ob-C
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-octave
hides /usr/share/emacs/27.1/lisp/org/ob-octave
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-attach
hides /usr/share/emacs/27.1/lisp/org/org-attach
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-hledger
hides /usr/share/emacs/27.1/lisp/org/ob-hledger
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-entities
hides /usr/share/emacs/27.1/lisp/org/org-entities
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox hides
/usr/share/emacs/27.1/lisp/org/ox
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-mobile
hides /usr/share/emacs/27.1/lisp/org/org-mobile
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-indent
hides /usr/share/emacs/27.1/lisp/org/org-indent
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-list hides
/usr/share/emacs/27.1/lisp/org/org-list
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-keys hides
/usr/share/emacs/27.1/lisp/org/org-keys
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lob hides
/usr/share/emacs/27.1/lisp/org/ob-lob
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-rmail hides
/usr/share/emacs/27.1/lisp/org/ol-rmail
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-macs hides
/usr/share/emacs/27.1/lisp/org/org-macs
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-w3m hides
/usr/share/emacs/27.1/lisp/org/ol-w3m
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-mhe hides
/usr/share/emacs/27.1/lisp/org/ol-mhe
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-maxima
hides /usr/share/emacs/27.1/lisp/org/ob-maxima
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lua hides
/usr/share/emacs/27.1/lisp/org/ob-lua
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-css hides
/usr/share/emacs/27.1/lisp/org/ob-css
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-lint hides
/usr/share/emacs/27.1/lisp/org/org-lint
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-irc hides
/usr/share/emacs/27.1/lisp/org/ol-irc
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org hides
/usr/share/emacs/27.1/lisp/org/org
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-shen hides
/usr/share/emacs/27.1/lisp/org/ob-shen
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-bbdb hides
/usr/share/emacs/27.1/lisp/org/ol-bbdb
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-datetree
hides /usr/share/emacs/27.1/lisp/org/org-datetree
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-plantuml
hides /usr/share/emacs/27.1/lisp/org/ob-plantuml
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-feed hides
/usr/share/emacs/27.1/lisp/org/org-feed
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-capture
hides /usr/share/emacs/27.1/lisp/org/org-capture
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-habit
hides /usr/share/emacs/27.1/lisp/org/org-habit
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sass hides
/usr/share/emacs/27.1/lisp/org/ob-sass
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-footnote
hides /usr/share/emacs/27.1/lisp/org/org-footnote
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-timer
hides /usr/share/emacs/27.1/lisp/org/org-timer
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-duration
hides /usr/share/emacs/27.1/lisp/org/org-duration
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-R hides
/usr/share/emacs/27.1/lisp/org/ob-R
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-faces
hides /usr/share/emacs/27.1/lisp/org/org-faces
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-comint
hides /usr/share/emacs/27.1/lisp/org/ob-comint
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-docview
hides /usr/share/emacs/27.1/lisp/org/ol-docview
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-man hides
/usr/share/emacs/27.1/lisp/org/ox-man
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-ascii hides
/usr/share/emacs/27.1/lisp/org/ox-ascii
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-eval hides
/usr/share/emacs/27.1/lisp/org/ob-eval
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-version
hides /usr/share/emacs/27.1/lisp/org/org-version
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob hides
/usr/share/emacs/27.1/lisp/org/ob
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-table hides
/usr/share/emacs/27.1/lisp/org/ob-table
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-colview
hides /usr/share/emacs/27.1/lisp/org/org-colview
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-clock
hides /usr/share/emacs/27.1/lisp/org/org-clock
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-eshell
hides /usr/share/emacs/27.1/lisp/org/ob-eshell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sed hides
/usr/share/emacs/27.1/lisp/org/ob-sed
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ref hides
/usr/share/emacs/27.1/lisp/org/ob-ref
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-io hides
/usr/share/emacs/27.1/lisp/org/ob-io
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-html hides
/usr/share/emacs/27.1/lisp/org/ox-html
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-abc hides
/usr/share/emacs/27.1/lisp/org/ob-abc
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-id hides
/usr/share/emacs/27.1/lisp/org/org-id
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-asymptote
hides /usr/share/emacs/27.1/lisp/org/ob-asymptote
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-scheme
hides /usr/share/emacs/27.1/lisp/org/ob-scheme
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-python
hides /usr/share/emacs/27.1/lisp/org/ob-python
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-info hides
/usr/share/emacs/27.1/lisp/org/ol-info
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-groovy
hides /usr/share/emacs/27.1/lisp/org/ob-groovy
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-latex hides
/usr/share/emacs/27.1/lisp/org/ob-latex
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-dot hides
/usr/share/emacs/27.1/lisp/org/ob-dot
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-loaddefs
hides /usr/share/emacs/27.1/lisp/org/org-loaddefs
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-beamer
hides /usr/share/emacs/27.1/lisp/org/ox-beamer
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-mouse
hides /usr/share/emacs/27.1/lisp/org/org-mouse
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-eshell
hides /usr/share/emacs/27.1/lisp/org/ol-eshell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-processing
hides /usr/share/emacs/27.1/lisp/org/ob-processing
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-gnuplot
hides /usr/share/emacs/27.1/lisp/org/ob-gnuplot
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-makefile
hides /usr/share/emacs/27.1/lisp/org/ob-makefile
/home/thomas/.config/emacs/elpa/eldoc-1.10.0/eldoc hides
/usr/share/emacs/27.1/lisp/emacs-lisp/eldoc
Features:
(misearch multi-isearch mhtml-mode css-mode eww mm-url url-queue color
js cc-mode cc-fonts cc-guess cc-menus cc-cmds cc-styles cc-align
cc-engine cc-vars cc-defs sgml-mode url-http url-auth url-gw nsm
sh-script smie executable htmlize mule-util ibuf-ext ibuffer
ibuffer-loaddefs pp shadow sort mail-extr eieio-opt speedbar sb-image
ezimage dframe help-fns radix-tree emacsbug sendmail imenu man go-mode
find-file ffap rx vc-git diff-mode org-eldoc flyspell ispell ol-eww
ol-rmail ol-mhe ol-irc ol-info ol-gnus nnir gnus-sum url url-proxy
url-privacy url-expand url-methods url-history mailcap shr url-cookie
url-domsuf url-util svg dom gnus-group gnus-undo gnus-start gnus-cloud
nnimap nnmail mail-source utf7 netrc nnoo parse-time iso8601 gnus-spec
gnus-int gnus-range message rmc puny rfc822 mml mml-sec epa derived epg
epg-config mm-decode mm-bodies mm-encode mail-parse rfc2231 mailabbrev
gmm-utils mailheader gnus-win gnus nnheader gnus-util rmail
rmail-loaddefs rfc2047 rfc2045 ietf-drums text-property-search
mail-utils mm-util mail-prsvr ol-docview doc-view jka-compr image-mode
exif ol-bibtex bibtex ol-bbdb ol-w3m org-tempo tempo ox-odt rng-loc
rng-uri rng-parse rng-match rng-dt rng-util rng-pttrn nxml-parse nxml-ns
nxml-enc xmltok nxml-util ox-latex ox-icalendar ox-html table ox-ascii
ox-publish ox org-element avl-tree ob-latex ob-shell shell org ob
ob-tangle ob-ref ob-lob ob-table ob-exp org-macro org-footnote org-src
ob-comint org-pcomplete pcomplete org-list org-faces org-entities
noutline outline org-version ob-emacs-lisp ob-core ob-eval org-table ol
org-keys org-compat advice org-macs org-loaddefs format-spec find-func
cal-menu calendar cal-loaddefs dired dired-loaddefs time-date checkdoc
lisp-mnt flymake-proc flymake compile comint ansi-color warnings
thingatpt modus-operandi-theme company-oddmuse company-keywords
company-etags etags fileloop generator xref project ring company-gtags
company-dabbrev-code company-dabbrev company-files company-clang
company-capf company-cmake company-semantic company-template
company-bbdb company pcase delight cl-extra help-mode use-package
use-package-ensure use-package-delight use-package-diminish
use-package-bind-key bind-key easy-mmode use-package-core finder-inf
edmacro kmacro recentf tree-widget wid-edit clang-rename
clang-include-fixer let-alist clang-format xml info package easymenu
browse-url url-handlers url-parse auth-source cl-seq eieio eieio-core
cl-macs eieio-loaddefs password-cache json subr-x map url-vars seq
byte-opt gv bytecomp byte-compile cconv cl-loaddefs cl-lib tooltip eldoc
electric uniquify ediff-hook vc-hooks lisp-float-type mwheel term/x-win
x-win term/common-win x-dnd tool-bar dnd fontset image regexp-opt fringe
tabulated-list replace newcomment text-mode elisp-mode lisp-mode
prog-mode register page tab-bar menu-bar rfn-eshadow isearch timer
select scroll-bar mouse jit-lock font-lock syntax facemenu font-core
term/tty-colors frame minibuffer cl-generic cham georgian utf-8-lang
misc-lang vietnamese tibetan thai tai-viet lao korean japanese eucjp-ms
cp51932 hebrew greek romanian slovak czech european ethiopic indian
cyrillic chinese composite charscript charprop case-table epa-hook
jka-cmpr-hook help simple abbrev obarray cl-preloaded nadvice loaddefs
button faces cus-face macroexp files text-properties overlay sha1 md5
base64 format env code-pages mule custom widget hashtable-print-readable
backquote threads dbusbind inotify dynamic-setting system-font-setting
font-render-setting xwidget-internal cairo move-toolbar gtk x-toolkit x
multi-tty make-network-process emacs)
Memory information:
((conses 16 468606 317258)
(symbols 48 38138 118)
(strings 32 160466 36787)
(string-bytes 1 4836226)
(vectors 16 59254)
(vector-slots 8 1357600 343876)
(floats 8 443 1316)
(intervals 56 2105 1619)
(buffers 1000 37))
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-14 0:43 bug#43389: 28.0.50; Emacs memory leaks Michael Heerdegen
2020-09-14 19:09 ` Juri Linkov
2020-09-17 20:59 ` Thomas Ingram
@ 2020-10-29 20:17 ` Trevor Bentley
2020-10-30 8:00 ` Eli Zaretskii
2020-11-18 21:47 ` Jose A. Ortega Ruiz
2020-12-09 19:41 ` Jose A. Ortega Ruiz
4 siblings, 1 reply; 110+ messages in thread
From: Trevor Bentley @ 2020-10-29 20:17 UTC (permalink / raw)
To: 43389
I'm regularly encountering a bug that might be this one. As with
the previous posters, one of my emacs instances routinely grows to
7-10 GB. Garbage collection shows emacs is only aware of ~250MB and
has nothing to collect, and /proc/<pid>/smaps shows all of the usage
in the heap.
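For reference, this is roughly how I compare the two views from
inside emacs; it's only a sketch (it just sums the Rss fields of the
smaps file for the running process) and assumes a Linux /proc
filesystem:
;; Sum the Rss fields of /proc/<pid>/smaps for this emacs, in kB,
;; to compare against what (garbage-collect) reports.
(with-temp-buffer
  (insert-file-contents (format "/proc/%d/smaps" (emacs-pid)))
  (let ((total-kb 0))
    (while (re-search-forward "^Rss: +\\([0-9]+\\) kB" nil t)
      (setq total-kb (+ total-kb (string-to-number (match-string 1)))))
    total-kb))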
The only emacs instance that hits this is the one I use the
"emacs-slack" package in, which means long-lived HTTPS
connections. I'm aware that this is a relatively unusual use of
emacs.
It doesn't start leaking until it has been active for 2-3 days.
It might depend on other factors, such as suspending or losing
network connectivity. Once the leak triggers, it grows at a rate
of about 1MB every few seconds. My machine has 32GB, so it gets
pretty far before I notice and kill it. I'm not sure if there is a
limit.
I built emacs with debug symbols and dumped some strace logs last
time it happened. This is from the "native-comp" branch, since
it's the only one I had built with debug symbols: GNU Emacs
28.0.50, commit feed53f8b5da0e58cce412cd41a52883dba6c1be. I see
the same with the version installed from my package manager (Arch,
GNU Emacs 27.1), and the strace log looks about the same, though
without symbols.
I waited until it was actively leaking, and then ran the following
command to print a stack trace whenever the heap is extended with
brk():
$ sudo strace -p $PID -k -r --trace="?brk" --signal="SIGTERM"
The findings: this particular leak is triggered in libgnutls. I
get large batches of the following (truncated) stack trace:
--- SNIP ---
> /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b]
> /usr/lib/libc-2.32.so(__sbrk+0x84) [0xf6f54]
> /usr/lib/libc-2.32.so(__default_morecore+0xd) [0x8d80d]
> /usr/lib/libc-2.32.so(sysmalloc+0x372) [0x890e2]
> /usr/lib/libc-2.32.so(_int_malloc+0xd9e) [0x8ad6e]
> /usr/lib/libc-2.32.so(__libc_malloc+0x1c1) [0x8be51]
> /usr/lib/libgnutls.so.30.28.1(gnutls_session_ticket_send+0x566)
> [0x3cc36]
> /usr/lib/libgnutls.so.30.28.1(gnutls_record_check_corked+0xc0a)
> [0x3e42a]
> /usr/lib/libgnutls.so.30.28.1(gnutls_transport_get_int+0x11b1)
> [0x34d31]
> /usr/lib/libgnutls.so.30.28.1(gnutls_transport_get_int+0x3144)
> [0x36cc4]
> /home/trevor/applications/opt/bin/emacs-28.0.50(emacs_gnutls_read+0x5d)
> [0x2e40a7]
> /home/trevor/applications/opt/bin/emacs-28.0.50(read_process_output+0x28e)
> [0x2def18]
--- SNIP ---
A larger log file is available here:
http://trevorbentley.com/emacs_strace.log
I'm not sure if gnutls is giving back buffers that emacs is
supposed to free, or if the leak is entirely contained within
gnutls, but something in that path is hanging on to a lot of
allocations indefinitely.
Hope this is useful, and let me know if I can provide any other
information that would be helpful.
-Trevor
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-10-29 20:17 ` Trevor Bentley
@ 2020-10-30 8:00 ` Eli Zaretskii
2020-11-11 21:15 ` Trevor Bentley
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-10-30 8:00 UTC (permalink / raw)
To: Trevor Bentley; +Cc: 43389
> From: Trevor Bentley <trevor@trevorbentley.com>
> Date: Thu, 29 Oct 2020 21:17:20 +0100
>
> It doesn't start leaking until it has been active for 2-3 days.
> It might depends on other factors, such as suspending or losing
> network connectivity. Once the leak triggers, it grows at a rate
> of about 1MB every few seconds. My machine has 32GB, so it gets
> pretty far before I notice and kill it. I'm not sure if there is a
> limit.
>
> I built emacs with debug symbols and dumped some strace logs last
> time it happened. This is from the "native-comp" branch, since
> it's the only one I had built with debug symbols: GNU Emacs
> 28.0.50, commit feed53f8b5da0e58cce412cd41a52883dba6c1be. I see
> the same with the version installed from my package manager (Arch,
> GNU Emacs 27.1), and the strace log looks about the same, though
> without symbols.
>
> I waited until it was actively leaking, and then ran the following
> command to print a stack trace whenever the heap is extended with
> brk():
>
> $ sudo strace -p $PID -k -r --trace="?brk" --signal="SIGTERM"
>
> The findings: this particular leak is triggered in libgnutls. I
> get large batches of the following (truncated) stack trace
Thanks. This trace doesn't show how many bytes were allocated, does
it? Without that it is hard to judge whether these GnuTLS calls could
be the culprit. Because the full trace shows other calls to malloc,
for example this:
> /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b]
> /usr/lib/libc-2.32.so(__sbrk+0x84) [0xf6f54]
> /usr/lib/libc-2.32.so(__default_morecore+0xd) [0x8d80d]
> /usr/lib/libc-2.32.so(sysmalloc+0x372) [0x890e2]
> /usr/lib/libc-2.32.so(_int_malloc+0xd9e) [0x8ad6e]
> /usr/lib/libc-2.32.so(_int_memalign+0x3f) [0x8b01f]
> /usr/lib/libc-2.32.so(_mid_memalign+0x13c) [0x8c12c]
> /home/trevor/applications/opt/bin/emacs-28.0.50(lisp_align_malloc+0x2e) [0x2364ee]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Fcons+0x65) [0x237f74]
> /home/trevor/applications/opt/bin/emacs-28.0.50(store_in_alist+0x5f) [0x5c9a3]
> /home/trevor/applications/opt/bin/emacs-28.0.50(gui_report_frame_params+0x46a) [0x607f1]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Fframe_parameters+0x499) [0x5d88b]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Fframe_parameter+0x381) [0x5dc9c]
> /home/trevor/applications/opt/bin/emacs-28.0.50(eval_sub+0x7a7) [0x26f964]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Fif+0x1f) [0x26b590]
> /home/trevor/applications/opt/bin/emacs-28.0.50(eval_sub+0x38b) [0x26f548]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Feval+0x7a) [0x26ef45]
> /home/trevor/applications/opt/bin/emacs-28.0.50(funcall_subr+0x257) [0x271463]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Ffuncall+0x192) [0x270fe9]
> /home/trevor/applications/opt/bin/emacs-28.0.50(internal_condition_case_n+0xa1) [0x26d81a]
> /home/trevor/applications/opt/bin/emacs-28.0.50(safe__call+0x211) [0x73943]
> /home/trevor/applications/opt/bin/emacs-28.0.50(safe__call1+0xba) [0x73b47]
> /home/trevor/applications/opt/bin/emacs-28.0.50(safe__eval+0x35) [0x73bd7]
> /home/trevor/applications/opt/bin/emacs-28.0.50(display_mode_element+0xe32) [0xb5515]
This seems to indicate some mode-line element that uses :eval, but
without knowing what it does it is hard to say anything more specific.
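If you want to check on your side which elements those are,
evaluating something like the following in the affected session is
enough (only a sketch, using nothing but the standard variables):
;; Pretty-print the buffer-local and the default mode-line specs;
;; any (:eval FORM) elements will be visible in the output.
(pp mode-line-format)
(pp (default-value 'mode-line-format))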
I also see this:
> /home/trevor/applications/opt/bin/emacs-28.0.50(_start+0x2e) [0x4598e]
2.870962 brk(0x55f5ed9a4000) = 0x55f5ed9a4000
> /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b]
> /usr/lib/libc-2.32.so(__sbrk+0x84) [0xf6f54]
> /usr/lib/libc-2.32.so(__default_morecore+0xd) [0x8d80d]
> /usr/lib/libc-2.32.so(sysmalloc+0x372) [0x890e2]
> /usr/lib/libc-2.32.so(_int_malloc+0xd9e) [0x8ad6e]
> /usr/lib/libc-2.32.so(_int_memalign+0x3f) [0x8b01f]
> /usr/lib/libc-2.32.so(_mid_memalign+0x13c) [0x8c12c]
> /home/trevor/applications/opt/bin/emacs-28.0.50(lisp_align_malloc+0x2e) [0x2364ee]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Fcons+0x65) [0x237f74]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Fmake_list+0x4f) [0x238544]
> /home/trevor/applications/opt/bin/emacs-28.0.50(concat+0x5c3) [0x2792f6]
> /home/trevor/applications/opt/bin/emacs-28.0.50(Fcopy_sequence+0x16a) [0x278d2a]
> /home/trevor/applications/opt/bin/emacs-28.0.50(timer_check+0x33) [0x1b79dd]
> /home/trevor/applications/opt/bin/emacs-28.0.50(readable_events+0x1a) [0x1b5d00]
> /home/trevor/applications/opt/bin/emacs-28.0.50(get_input_pending+0x2f) [0x1bcf3a]
> /home/trevor/applications/opt/bin/emacs-28.0.50(detect_input_pending_run_timers+0x2e) [0x1c4eb1]
> /home/trevor/applications/opt/bin/emacs-28.0.50(wait_reading_process_output+0x14ec) [0x2de0c0]
> /home/trevor/applications/opt/bin/emacs-28.0.50(sit_for+0x211) [0x53e78]
> /home/trevor/applications/opt/bin/emacs-28.0.50(read_char+0x1019) [0x1b3f62]
This indicates some timer that runs; again, without knowing which
timer and what it does, it is hard to proceed.
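To see which timers are active in that session, something like this
should do (just a sketch; the list-timers command may not exist in
older builds, but the timer-list variable is always there):
;; Show the active timers, falling back to the raw timer objects
;; if the interactive command is not available.
(if (fboundp 'list-timers)
    (list-timers)
  timer-list)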
Etc. etc. -- the bottom line is that I think we need to know how many
bytes are allocated in each call to make some progress. It would be
even more useful if we could somehow know which of the allocated
buffers are free'd soon and which aren't. That's because Emacs calls
memory allocation functions _a_lot_, and it is completely normal to
see a lot of these calls. What we need is to find allocations that
don't get free'd, and whose byte counts come close to explaining the
rate of 1MB every few seconds. So these calls need to be filtered
somehow, otherwise we will not see the forest for the gazillion trees.
> I'm not sure if gnutls is giving back buffers that emacs is
> supposed to free, or if the leak is entirely contained within
> gnutls, but something in that path is hanging on to a lot of
> allocations indefinitely.
The GnuTLS functions we call in emacs_gnutls_read are:
gnutls_record_recv
emacs_gnutls_handle_error
The latter is only called if there's an error, so I'm guessing it is
not part of your trace. And the former doesn't say in its
documentation that Emacs should free any buffers after calling it, so
I'm not sure how Emacs could be the culprit here. If GnuTLS is the
culprit (and as explained above, this is not certain at this point),
perhaps upgrading to a newer GnuTLS version or reporting this to
GnuTLS developers would allow some progress.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-10-30 8:00 ` Eli Zaretskii
@ 2020-11-11 21:15 ` Trevor Bentley
2020-11-12 14:24 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Trevor Bentley @ 2020-11-11 21:15 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389
> Thanks. This trace doesn't show how many bytes were allocated,
> does it? Without that it is hard to judge whether these GnuTLS
> calls could be the culprit. Because the full trace shows other
> calls to malloc, for example this:
It doesn't show the size of the individual allocations, but it
indirectly shows the size of the heap. Each brk() line like this
one is the start of an entry:
0.000000 brk(0x55f5ed93e000) = 0x55f5ed93e000
The first field is the time relative to the previous brk() call, and
the argument in parentheses is the requested program break, i.e. the
new end-of-heap address. Subtracting the argument of one call from
the argument of the previous call shows how much the heap was
extended. In this capture, subtracting the first from the last shows
that the heap grew by 8,683,520 bytes, and summing the relative
timestamps shows that this happened in 90.71 seconds. It's growing
at about 100KB/sec at this point.
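Just to sanity-check that arithmetic from inside Emacs (a trivial
worked example, nothing assumed beyond the two numbers above):
(/ 8683520 90.71)  ; => ~95728 bytes/sec, i.e. a bit under 100KB/sec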
Also, keep in mind that this is brk(). There could have been any
number of malloc() calls in between, zero or millions, but these
are the ones that couldn't find any unused blocks and had to
extend the heap.
> I'm not sure how Emacs could be the culprit here. If GnuTLS is
> the culprit (and as explained above, this is not certain at this
> point), perhaps upgrading to a newer GnuTLS version or reporting
> this to GnuTLS developers would allow some progress.
I think you are right, GnuTLS was probably a symptom, not a cause.
I took a while to respond because I tried running emacs in
Valgrind's Massif heap debugging tool, and it took forever. Some
results are in now, and it looks like GnuTLS wasn't present in the
leak this time around.
First of all, if you aren't familiar with Massif (as I wasn't), it
captures occasional snapshots of the whole heap and all
allocations, and lets you dump a tree-view of those allocations
later with the "ms_print" tool. The timestamps are fairly
useless, as they are in "number of instructions executed." Here
are three files from my investigation:
The raw massif output:
http://trevorbentley.com/massif.out.3364630
The *full* tree output:
http://trevorbentley.com/ms_print.3364630.txt
The tree output showing only entries above 10% usage:
http://trevorbentley.com/ms_print.thresh10.3364630.txt
What you can see from the handy ASCII graph at the top is that
memory usage was chugging along, growing upwards for a couple of
days, and then spiked very quickly up to just over 4GB over a few
hours.
If you scroll down to the very last checkpoint (the 10% threshold
file is better for this), you can see where most of the memory is
used. Very large amounts of memory are involved, but from different sources:
1.7GB from lisp_align_malloc (nearly all from Fcons), 1.4GB from
lmalloc (half from allocate_vector_block), 700MB from lrealloc
(mostly from enlarge_buffer_text).
There were no large buffers open, but there were long-lived
network sockets and plenty of timers. I didn't check, but I'd say
the largest buffer was up to a couple of megabytes, since
emacs-slack logs fairly heavily.
I'm not sure what to make of this, really. It seems like a
general, sudden-onset, intense craving for more memory while not
particularly doing much. I could blindly suggest extreme memory
fragmentation problems, but that doesn't seem very likely.
It's trivial to reproduce, but takes 3-5 days, so not exactly
handy to debug. Let me know if you have any requests for the next
iteration before I kill it. It's running in Valgrind again.
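One cheap thing I can do myself next time is to sample the Lisp
allocation counters while it is leaking; just a sketch, but
memory-use-counts is built in and returns cumulative allocation
counters for the various Lisp object types:
;; Take readings a minute or two apart while the leak is active;
;; the deltas show how fast Lisp objects are being consed.
(memory-use-counts)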
Thanks,
-Trevor
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-11 21:15 ` Trevor Bentley
@ 2020-11-12 14:24 ` Eli Zaretskii
2020-11-16 20:16 ` Eli Zaretskii
[not found] ` <87wnyju40z.fsf@mail.trevorbentley.com>
0 siblings, 2 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-12 14:24 UTC (permalink / raw)
To: Trevor Bentley; +Cc: 43389
> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: 43389@debbugs.gnu.org
> Date: Wed, 11 Nov 2020 22:15:21 +0100
>
> The raw massif output:
>
> http://trevorbentley.com/massif.out.3364630
>
> The *full* tree output:
>
> http://trevorbentley.com/ms_print.3364630.txt
>
> The tree output showing only entries above 10% usage:
>
> http://trevorbentley.com/ms_print.thresh10.3364630.txt
>
> What you can see from the handy ASCII graph at the top is that
> memory usage was chugging along, growing upwards for a couple of
> days, and then spiked very quickly up to just over 4GB over a few
> hours.
When this peak happens, I see the following unusual circumstances:
. ImageMagick functions are called and request a lot of (aligned)
memory;
. something called "gomp_thread_start" is called, and also allocates
a lot of memory -- does this mean additional threads start running?
Or am I reading the graphs incorrectly?
Also, I see that you are using the native-compilation branch, and
something called slack-image is being loaded? What is this about?
And can you tell me whether src/config.h defines DOUG_LEA_MALLOC to a
non-zero value on that system?
> If you scroll down to the very last checkpoint (the 10% threshold
> file is better for this), you can see where most of the memory is
> used. Very large sums of memory, but from different sources.
> 1.7GB from lisp_align_malloc (nearly all from Fcons), 1.4GB from
> lmalloc (half from allocate_vector_block), 700MB from lrealloc
> (mostly from enlarge_buffer_text).
>
> There were no large buffers open, but there were long-lived
> network sockets and plenty of timers. I didn't check, but I'd say
> the largest buffer was up to a couple of megabytes, since
> emacs-slack logs fairly heavily.
>
> I'm not sure what to make of this, really. It seems like a
> general, sudden-onset, intense craving for more memory while not
> particularly doing much. I could blindly suggest extreme memory
> fragmentation problems, but that doesn't seem very likely.
It is important to understand what was going on when the memory
started growing fast. You say there were no large buffers, but what
about temporary buffers? What could cause gomp_thread_start, whatever
that is, to start?
We recently added a malloc-info command; maybe you could use it to
show more information about the malloc arenas before and after the
session starts to eat up memory.
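For example, something like this (only a sketch; malloc-info writes
its XML report to stderr, so the output ends up wherever Emacs's
stderr goes, e.g. the terminal you started it from):
;; Dump the glibc arena statistics once an hour, so that we get
;; snapshots from before and after the footprint starts growing.
(run-with-timer 0 3600
                (lambda ()
                  (when (fboundp 'malloc-info)
                    (malloc-info))))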
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-12 14:24 ` Eli Zaretskii
@ 2020-11-16 20:16 ` Eli Zaretskii
2020-11-16 20:42 ` Florian Weimer
[not found] ` <87wnyju40z.fsf@mail.trevorbentley.com>
1 sibling, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-16 20:16 UTC (permalink / raw)
To: fweimer, carlos, dj; +Cc: 43389
Bringing on board of this discussion glibc malloc experts: Florian
Weimer, DJ Delorie, and Carlos O'Donell.
For some time (several months, I think) we have reports from Emacs
users that the memory footprints of their Emacs sessions sometimes
start growing very quickly, from several hundreds of MBytes to several
gigabytes in a day or even just a few hours, in some cases causing
the OOM killer to kick in and kill the Emacs process. Please refer to the
details described in the discussions of this bug report:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389
and 3 other bugs merged to it, which describe what sounds like the
same problem.
The questions that I'd like to eventually be able to answer are:
. is this indeed due to some malloc'ed chunk that is being used for
prolonged periods of time, and prevents releasing parts of the
heap to the system? IOW, is this pathological, but correct
behavior, or is this some bug?
. if this is correct behavior, can Emacs do something to avoid
triggering it? For example, should we consider tuning glibc's
malloc in some way, by changing the 3 calls to mallopt in
init_alloc_once_for_pdumper?
Your thoughts and help in investigating these problems will be highly
appreciated. Please feel free to ask any questions you come up with,
including about the details of Emacs's memory management and anything
related.
Thanks!
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-16 20:16 ` Eli Zaretskii
@ 2020-11-16 20:42 ` Florian Weimer
2020-11-17 15:45 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Florian Weimer @ 2020-11-16 20:42 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, 43389, dj
* Eli Zaretskii:
> For some time (several months, I think) we have reports from Emacs
> users that the memory footprints of their Emacs sessions sometimes
> start growing very quickly, from several hundreds of MBytes to several
> gigabytes in a day or even just few hours, and in some cases causing
> the OOMK to kick in and kill the Emacs process.
The last time I saw this was a genuine memory leak in the Emacs C code.
Just saying. 8-)
> The questions that I'd like to eventually be able to answer are:
>
> . is this indeed due to some malloc'ed chunk that is being used for
> prolonged periods of time, and prevents releasing parts of the
> heap to the system? IOW, is this pathological, but correct
> behavior, or is this some bug?
>
> . if this is correct behavior, can Emacs do something to avoid
> triggering it? For example, should we consider tuning glibc's
> malloc in some way, by changing the 3 calls to mallopt in
> init_alloc_once_for_pdumper?
>
> Your thoughts and help in investigating these problems will be highly
> appreciated. Please feel free to ask any questions you come up with,
> including about the details of Emacs's memory management and anything
> related.
There is an issue with reusing posix_memalign allocations. On my system
(running Emacs 27.1 as supplied by Fedora 32), I only see such
allocations as the backing storage for the glib (sic) slab allocator.
It gets exercised mostly when creating UI elements, as far as I can
tell. In theory, these backing allocations should be really long-term
and somewhat limited, so the fragmentation issue peculiar to aligned
allocations should not be a concern.
There is actually a glibc patch floating around that fixes the aligned
allocation problem, at some (hopefully limited) performance cost to
aligned allocations. We want to get it reviewed and integrated into
upstream glibc. If there is a working reproducer, we could run it
against a patched glibc.
The other issue we have is that in recent times thread counts have
grown faster than system memory, and glibc basically scales RSS
overhead with thread count, not memory. A use of libgomp suggests that many
threads might indeed be spawned. If their lifetimes overlap, it would
not be unheard of to end up with some RSS overhead in the order of
peak-usage-per-thread times 8 times the number of hardware threads
supported by the system. Setting MALLOC_ARENA_MAX to a small value
counteracts that, so it's very simple to experiment with it if you have
a working reproducer.
Thanks,
Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-16 20:42 ` Florian Weimer
@ 2020-11-17 15:45 ` Eli Zaretskii
2020-11-17 16:32 ` Carlos O'Donell
2020-11-17 16:33 ` Florian Weimer
0 siblings, 2 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 15:45 UTC (permalink / raw)
To: Florian Weimer; +Cc: carlos, 43389, dj
> From: Florian Weimer <fweimer@redhat.com>
> Cc: carlos@redhat.com, dj@redhat.com, 43389@debbugs.gnu.org
> Date: Mon, 16 Nov 2020 21:42:39 +0100
>
> * Eli Zaretskii:
>
> > For some time (several months, I think) we have reports from Emacs
> > users that the memory footprints of their Emacs sessions sometimes
> > start growing very quickly, from several hundreds of MBytes to several
> > gigabytes in a day or even just few hours, and in some cases causing
> > the OOMK to kick in and kill the Emacs process.
>
> The last time I saw this was a genuine memory leak in the Emacs C code.
That's always a possibility. However, 2 aspects of these bug reports
seem to hint that there's more here than meets the eye:
. the problem happens only to a small number of people, and it is
hard to find an area in Emacs that would use memory in some special
enough way to happen rarely
. the Emacs sessions of the people who reported this would run for
many days and even weeks on end with fairly normal memory footprint
(around 500MB) that was very stable, and then suddenly begin
growing by the minute to 10 or 20 times that
> There is an issue with reusing posix_memalign allocations. On my system
> (running Emacs 27.1 as supplied by Fedora 32), I only see such
> allocations as the backing storage for the glib (sic) slab allocator.
(By "backing storage" you mean malloc calls that request large chunks
so that malloc obtains the memory from mmap? Or do you mean something
else?)
Are the problems with posix_memalign also relevant to calls to
aligned_alloc? Emacs calls the latter _a_lot_, see lisp_align_malloc.
> It gets exercised mostly when creating UI elements, as far as I can
> tell.
I guess your build uses GTK as the toolkit?
> There is actually a glibc patch floating around that fixes the aligned
> allocation problem, at some (hopefully limited) performance cost to
> aligned allocations. We want to get it reviewed and integrated into
> upstream glibc. If there is a working reproducer, we could run it
> against a patched glibc.
We don't have a reproducer, but several people said that the problem
happens to them regularly enough in their normal usage. So I think we
can ask them to try a patched glibc and see if the problem goes away.
> The other issue we have is that thread counts has exceeded in recent
> times more than system memory, and glibc basically scales RSS overhead
> with thread count, not memory. A use of libgomp suggests that many
> threads might indeed be spawned. If their lifetimes overlap, it would
> not be unheard of to end up with some RSS overhead in the order of
> peak-usage-per-thread times 8 times the number of hardware threads
> supported by the system. Setting MALLOC_ARENA_MAX to a small value
> counteracts that, so it's very simple to experiment with it if you have
> a working reproducer.
"Small value" being something like 2?
Emacs doesn't use libgomp, I think that comes from ImageMagick, and
most people who reported these problems use Emacs that wasn't built
with ImageMagick. The only other source of threads in Emacs I know of
is GTK, but AFAIK it starts a small number of them, like 4.
In any case, experimenting with MALLOC_ARENA_MAX is easy, so I think
we should ask the people who experience this to try that.
Any other suggestions or thoughts?
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 15:45 ` Eli Zaretskii
@ 2020-11-17 16:32 ` Carlos O'Donell
2020-11-17 17:13 ` Eli Zaretskii
2020-11-17 16:33 ` Florian Weimer
1 sibling, 1 reply; 110+ messages in thread
From: Carlos O'Donell @ 2020-11-17 16:32 UTC (permalink / raw)
To: Eli Zaretskii, Florian Weimer; +Cc: 43389, dj
On 11/17/20 10:45 AM, Eli Zaretskii wrote:
>> From: Florian Weimer <fweimer@redhat.com>
>> Cc: carlos@redhat.com, dj@redhat.com, 43389@debbugs.gnu.org
>> Date: Mon, 16 Nov 2020 21:42:39 +0100
>> There is an issue with reusing posix_memalign allocations. On my system
>> (running Emacs 27.1 as supplied by Fedora 32), I only see such
>> allocations as the backing storage for the glib (sic) slab allocator.
>
> (By "backing storage" you mean malloc calls that request large chunks
> so that malloc obtains the memory from mmap? Or do you mean something
> else?)
In this case I expect Florian means that glib (sic), which is a slab
allocator, needs to allocate an aligned slab (long lived) and so uses
posix_memalign to create such an allocation. Therefore these long-lived
aligned allocations should not cause significant internal fragmentation.
> Are the problems with posix_memalign also relevant to calls to
> aligned_alloc? Emacs calls the latter _a_lot_, see lisp_align_malloc.
All aligned allocations suffer from an algorithmic defect that causes
subsequent allocations of the same alignment to be unable to use previously
free'd aligned chunks. This causes aligned allocations to internally
fragment the heap, and this internal fragmentation can spread to the
entire heap and cause heap growth.
The WIP glibc patch is here (June 2019):
https://lists.fedoraproject.org/archives/list/glibc@lists.fedoraproject.org/thread/2PCHP5UWONIOAEUG34YBAQQYD7JL5JJ4/
>> The other issue we have is that thread counts has exceeded in recent
>> times more than system memory, and glibc basically scales RSS overhead
>> with thread count, not memory. A use of libgomp suggests that many
>> threads might indeed be spawned. If their lifetimes overlap, it would
>> not be unheard of to end up with some RSS overhead in the order of
>> peak-usage-per-thread times 8 times the number of hardware threads
>> supported by the system. Setting MALLOC_ARENA_MAX to a small value
>> counteracts that, so it's very simple to experiment with it if you have
>> a working reproducer.
>
> "Small value" being something like 2?
The current code creates 8 arenas per core on a 64-bit system.
You could set it to 1 arena per core to force more threads into the
arenas and push them to reuse more chunks.
export MALLOC_ARENA_MAX=$(nproc)
And see if that helps.
> Emacs doesn't use libgomp, I think that comes from ImageMagick, and
> most people who reported these problems use Emacs that wasn't built
> with ImageMagick. The only other source of threads in Emacs I know of
> is GTK, but AFAIK it starts a small number of them, like 4.
>
> In any case, experimenting with MALLOC_ARENA_MAX is easy, so I think
> we should ask the people who experience this to try that.
>
> Any other suggestions or thoughts?
Yes, we have malloc trace utilities for capturing and simulating traces
from applications:
https://pagure.io/glibc-malloc-trace-utils
If you can capture the application allocations with the tracer then we
should be able to reproduce it locally and observe the problem.
--
Cheers,
Carlos.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 16:32 ` Carlos O'Donell
@ 2020-11-17 17:13 ` Eli Zaretskii
2020-11-17 17:20 ` DJ Delorie
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 17:13 UTC (permalink / raw)
To: Carlos O'Donell; +Cc: fweimer, 43389, dj
> Cc: dj@redhat.com, 43389@debbugs.gnu.org
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Tue, 17 Nov 2020 11:32:23 -0500
>
> > "Small value" being something like 2?
>
> The current code creates 8 arenas per core on a 64-bit system.
>
> You could set it to 1 arena per core to force more threads into the
> arenas and push them to reuse more chunks.
>
> export MALLOC_ARENA_MAX=$(nproc)
Isn't that too many? Emacs is a single-threaded program, with a small
number of GTK threads that aren't supposed to allocate a lot of
memory. Sounds like 2 should be enough, no?
> > Any other suggestions or thoughts?
>
> Yes, we have malloc trace utilities for capturing and simulating traces
> from applications:
>
> https://pagure.io/glibc-malloc-trace-utils
>
> If you can capture the application allocations with the tracer then we
> should be able to reproduce it locally and observe the problem.
You mean, trace all the memory allocations in Emacs with the tracer?
That would produce huge amounts of data, as Emacs calls malloc at an
insane frequency. Or maybe I don't understand what kind of tracing
procedure you had in mind (I never used these tools, and didn't know
they existed until you pointed to them).
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 17:13 ` Eli Zaretskii
@ 2020-11-17 17:20 ` DJ Delorie
2020-11-17 19:52 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: DJ Delorie @ 2020-11-17 17:20 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, fweimer, 43389
Eli Zaretskii <eliz@gnu.org> writes:
> You mean, trace all the memory allocations in Emacs with the tracer?
> That would produce huge amounts of data, as Emacs calls malloc at an
> insane frequency. Or maybe I don't understand what kind of tracing
> procedure you had in mind
That's exactly what it does, and yes, it easily generates gigabytes
(sometimes terabytes) of trace information. But it also captures the
most accurate view of what's going on, and lets us replay (via
simulation) all the malloc API calls, so we can reproduce most
malloc-related problems on a whim.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 17:20 ` DJ Delorie
@ 2020-11-17 19:52 ` Eli Zaretskii
2020-11-17 19:59 ` DJ Delorie
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 19:52 UTC (permalink / raw)
To: DJ Delorie; +Cc: carlos, fweimer, 43389
> From: DJ Delorie <dj@redhat.com>
> Cc: carlos@redhat.com, fweimer@redhat.com, 43389@debbugs.gnu.org
> Date: Tue, 17 Nov 2020 12:20:21 -0500
>
> Eli Zaretskii <eliz@gnu.org> writes:
> > You mean, trace all the memory allocations in Emacs with the tracer?
> > That would produce huge amounts of data, as Emacs calls malloc at an
> > insane frequency. Or maybe I don't understand what kind of tracing
> > procedure you had in mind
>
> That's exactly what it does, and yes, it easily generates gigabytes
> (sometimes terabytes) of trace information. But it also captures the
> most accurate view of what's going on, and lets us replay (via
> simulation) all the malloc API calls, so we can reproduce most
> malloc-related problems on a whim.
Is it possible to start tracing only when the fast growth of memory
footprint commences? Or is tracing from the very beginning a
necessity for providing meaningful data?
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 19:52 ` Eli Zaretskii
@ 2020-11-17 19:59 ` DJ Delorie
2020-11-17 20:13 ` Florian Weimer
0 siblings, 1 reply; 110+ messages in thread
From: DJ Delorie @ 2020-11-17 19:59 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, fweimer, 43389
Eli Zaretskii <eliz@gnu.org> writes:
> Is it possible to start tracing only when the fast growth of memory
> footprint commences? Or is tracing from the very beginning a
> necessity for providing meaningful data?
Well, both. The API allows you to start/stop tracing whenever you like,
but the state of your heap depends on the entire history of calls.
So, for example, a trace during the "fast growth" period might show a
pattern that helps us[*] debug the problem, but if we want to
*reproduce* the problem, we'd need a full trace.
[*] and by "us" I mostly mean "emacs developers who understand their
code" ;-)
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 19:59 ` DJ Delorie
@ 2020-11-17 20:13 ` Florian Weimer
2020-11-17 20:16 ` DJ Delorie
0 siblings, 1 reply; 110+ messages in thread
From: Florian Weimer @ 2020-11-17 20:13 UTC (permalink / raw)
To: DJ Delorie; +Cc: carlos, 43389
* DJ Delorie:
> Eli Zaretskii <eliz@gnu.org> writes:
>> Is it possible to start tracing only when the fast growth of memory
>> footprint commences? Or is tracing from the very beginning a
>> necessity for providing meaningful data?
>
> Well, both. The API allows you to start/stop tracing whenever you like,
> but the state of your heap depends on the entire history of calls.
>
> So, for example, a trace during the "fast growth" period might show a
> pattern that helps us[*] debug the problem, but if we want to
> *reproduce* the problem, we'd need a full trace.
>
> [*] and by "us" I mostly mean "emacs developers who understand their
> code" ;-)
But how helpful would that be, given that malloc_info does not really
show any inactive memory (discounting my 200 MiB hole)?
We would need a comparable tracer for the Lisp-level allocator, I think.
Thanks,
Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:13 ` Florian Weimer
@ 2020-11-17 20:16 ` DJ Delorie
2020-11-17 20:27 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: DJ Delorie @ 2020-11-17 20:16 UTC (permalink / raw)
To: Florian Weimer; +Cc: carlos, 43389
Florian Weimer <fweimer@redhat.com> writes:
> But how helpful would that be, given that malloc_info does not really
> show any inactive memory (discounting my 200 MiB hole)?
One doesn't know how helpful until after looking at the data. If RSS is
going up fast, something is calling either sbrk or mmap. If that thing
is malloc, a trace tells us if there's a pattern. If that pattern
blames the lisp allocator, my job here is done ;-)
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:16 ` DJ Delorie
@ 2020-11-17 20:27 ` Eli Zaretskii
2020-11-17 20:35 ` Florian Weimer
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 20:27 UTC (permalink / raw)
To: DJ Delorie; +Cc: fweimer, carlos, 43389
> From: DJ Delorie <dj@redhat.com>
> Cc: eliz@gnu.org, carlos@redhat.com, 43389@debbugs.gnu.org
> Date: Tue, 17 Nov 2020 15:16:11 -0500
>
> Florian Weimer <fweimer@redhat.com> writes:
> > But how helpful would that be, given that malloc_info does not really
> > show any inactive memory (discounting my 200 MiB hole)?
>
> One doesn't know how helpful until after looking at the data. If RSS is
> going up fast, something is calling either sbrk or mmap. If that thing
> is malloc, a trace tells us if there's a pattern. If that pattern
> blames the lisp allocator, my job here is done ;-)
I won't hold my breath for the lisp allocator to take the blame. A
couple of people who were hit by the problem reported the statistics
of Lisp objects as produced by GC (those reports are somewhere in the
bug discussions, you should be able to find them). Those statistics
indicated a very moderate amount of live Lisp objects, nowhere near
the huge memory footprint.
(It would be interesting to see the GC statistics from Florian's
session, btw.)
Given this data, it seems that if the Lisp allocator is involved, the
real problem is in what happens with memory it frees when objects are
GC'ed.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:27 ` Eli Zaretskii
@ 2020-11-17 20:35 ` Florian Weimer
2020-11-17 20:43 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Florian Weimer @ 2020-11-17 20:35 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, 43389, DJ Delorie
* Eli Zaretskii:
> (It would be interesting to see the GC statistics from Florian's
> session, btw.)
Is this the value of (garbage-collect)?
((conses 16 1877807 263442)
(symbols 48 40153 113)
(strings 32 164110 77752)
(string-bytes 1 5874689)
(vectors 16 64666)
(vector-slots 8 1737780 331974)
(floats 8 568 1115)
(intervals 56 163746 19749)
(buffers 1000 1092))
Thanks,
Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:35 ` Florian Weimer
@ 2020-11-17 20:43 ` Eli Zaretskii
2020-11-17 20:58 ` Florian Weimer
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 20:43 UTC (permalink / raw)
To: Florian Weimer; +Cc: carlos, 43389, dj
> From: Florian Weimer <fweimer@redhat.com>
> Cc: DJ Delorie <dj@redhat.com>, carlos@redhat.com, 43389@debbugs.gnu.org
> Date: Tue, 17 Nov 2020 21:35:54 +0100
>
> * Eli Zaretskii:
>
> > (It would be interesting to see the GC statistics from Florian's
> > session, btw.)
>
> Is this the value of (garbage-collect)?
>
> ((conses 16 1877807 263442)
> (symbols 48 40153 113)
> (strings 32 164110 77752)
> (string-bytes 1 5874689)
> (vectors 16 64666)
> (vector-slots 8 1737780 331974)
> (floats 8 568 1115)
> (intervals 56 163746 19749)
> (buffers 1000 1092))
Yes. "C-h f garbage-collect" will describe the meaning of the
numbers. AFAICT, this barely explains 70 MBytes and change of Lisp
data. (The "buffers" part excludes buffer text, but you should be
able to add that by summing the sizes shown by "C-x C-b".)
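FWIW, a quick way to total that list (only a sketch: it simply sums
SIZE times USED for each entry returned by garbage-collect):
;; Estimate the live Lisp data, in bytes.
(let ((total 0))
  (dolist (entry (garbage-collect) total)
    (setq total (+ total (* (nth 1 entry) (nth 2 entry))))))
With the numbers you posted, this comes to about 68 MB.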
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:43 ` Eli Zaretskii
@ 2020-11-17 20:58 ` Florian Weimer
2020-11-17 21:10 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Florian Weimer @ 2020-11-17 20:58 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, 43389, dj
* Eli Zaretskii:
>> From: Florian Weimer <fweimer@redhat.com>
>> Cc: DJ Delorie <dj@redhat.com>, carlos@redhat.com, 43389@debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 21:35:54 +0100
>>
>> * Eli Zaretskii:
>>
>> > (It would be interesting to see the GC statistics from Florian's
>> > session, btw.)
>>
>> Is this the value of (garbage-collect)?
>>
>> ((conses 16 1877807 263442)
>> (symbols 48 40153 113)
>> (strings 32 164110 77752)
>> (string-bytes 1 5874689)
>> (vectors 16 64666)
>> (vector-slots 8 1737780 331974)
>> (floats 8 568 1115)
>> (intervals 56 163746 19749)
>> (buffers 1000 1092))
>
> Yes. "C-h f garbage-collect" will describe the meaning of the
> numbers. AFAICT, this barely explains 70 MBytes and change of Lisp
> data. (The "buffers" part excludes buffer text, but you should be
> able to add that by summing the sizes shown by "C-x C-b".)
I get this:
(let ((size 0))
(dolist (buffer (buffer-list) size)
(setq size (+ size (buffer-size buffer)))))
⇒ 98249826
So it's not a small number, but still far away from those 800 MiB.
Thanks,
Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:58 ` Florian Weimer
@ 2020-11-17 21:10 ` Eli Zaretskii
2020-11-18 5:43 ` Carlos O'Donell
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 21:10 UTC (permalink / raw)
To: Florian Weimer; +Cc: carlos, 43389, dj
> From: Florian Weimer <fweimer@redhat.com>
> Cc: dj@redhat.com, carlos@redhat.com, 43389@debbugs.gnu.org
> Date: Tue, 17 Nov 2020 21:58:57 +0100
>
> (let ((size 0))
> (dolist (buffer (buffer-list) size)
> (setq size (+ size (buffer-size buffer)))))
> ⇒ 98249826
>
> So it's not a small number, but still far away from those 800 MiB.
Yes. I have a very similar value: 94642916 (in 376 buffers; you have
more than 1000). This is in a session that runs for 17 days and whose
VM size is 615 MB: a "normal" size for a long-living session, nowhere
near 2GB, let alone 11GB someone reported.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 21:10 ` Eli Zaretskii
@ 2020-11-18 5:43 ` Carlos O'Donell
2020-11-18 6:09 ` Jean Louis
2020-11-18 18:01 ` Eli Zaretskii
0 siblings, 2 replies; 110+ messages in thread
From: Carlos O'Donell @ 2020-11-18 5:43 UTC (permalink / raw)
To: Eli Zaretskii, Florian Weimer; +Cc: 43389, dj
On 11/17/20 4:10 PM, Eli Zaretskii wrote:
>> From: Florian Weimer <fweimer@redhat.com>
>> Cc: dj@redhat.com, carlos@redhat.com, 43389@debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 21:58:57 +0100
>>
>> (let ((size 0))
>> (dolist (buffer (buffer-list) size)
>> (setq size (+ size (buffer-size buffer)))))
>> ⇒ 98249826
>>
>> So it's not a small number, but still far away from those 800 MiB.
>
> Yes. I have a very similar value: 94642916 (in 376 buffers; you have
> more than 1000). This is in a session that runs for 17 days and whose
> VM size is 615 MB: a "normal" size for a long-living session, nowhere
> near 2GB, let alone 11GB someone reported.
If you get us a data trace I will run it through the simulator and produce
a report that includes graphs explaining the results of the trace and
we'll see if a smoking gun shows up.
The biggest smoking gun is a spike in RSS size without a matching Ideal
RSS (integral of API calls). This would indicate an algorithmic issue.
Usually, though, we can have ratcheting effects due to mixed object
lifetimes; those are harder to detect, and we don't yet have tooling to
look for such issues. We'd need to track chunk lifetimes.
--
Cheers,
Carlos.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 5:43 ` Carlos O'Donell
@ 2020-11-18 6:09 ` Jean Louis
2020-11-18 8:32 ` Andreas Schwab
2020-11-18 18:01 ` Eli Zaretskii
1 sibling, 1 reply; 110+ messages in thread
From: Jean Louis @ 2020-11-18 6:09 UTC (permalink / raw)
To: Carlos O'Donell; +Cc: Florian Weimer, 43389, dj
Is it recommended to collect strace with this below?
strace emacs > output 2>&1
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 6:09 ` Jean Louis
@ 2020-11-18 8:32 ` Andreas Schwab
2020-11-18 9:01 ` Jean Louis
2020-11-19 15:57 ` Carlos O'Donell
0 siblings, 2 replies; 110+ messages in thread
From: Andreas Schwab @ 2020-11-18 8:32 UTC (permalink / raw)
To: Jean Louis; +Cc: Carlos O'Donell, Florian Weimer, dj, 43389
On Nov 18 2020, Jean Louis wrote:
> Is it recommended to collect strace with this below?
>
> strace emacs > output 2>&1
It is preferable to use the -o option to decouple the strace output from
the inferior output.
Andreas.
--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 8:32 ` Andreas Schwab
@ 2020-11-18 9:01 ` Jean Louis
2020-11-18 16:19 ` Russell Adams
2020-11-19 15:57 ` Carlos O'Donell
1 sibling, 1 reply; 110+ messages in thread
From: Jean Louis @ 2020-11-18 9:01 UTC (permalink / raw)
To: Andreas Schwab; +Cc: Carlos O'Donell, Florian Weimer, dj, 43389
* Andreas Schwab <schwab@linux-m68k.org> [2020-11-18 11:32]:
> On Nov 18 2020, Jean Louis wrote:
>
> > Is it recommended to collect strace with this below?
> >
> > strace emacs > output 2>&1
>
> It is preferable to use the -o option to decouple the strace output from
> the inferior output.
Thank you, I had seen that among the options, and right now I am
running it with:
#!/bin/bash
unset CDPATH
# ulimit -m 3145728
#export MALLOC_ARENA_MAX=4
date >> /home/data1/protected/tmp/emacs-debug
strace -o emacs.strace emacs >> /home/data1/protected/tmp/emacs-debug 2>&1
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 9:01 ` Jean Louis
@ 2020-11-18 16:19 ` Russell Adams
2020-11-18 17:30 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Russell Adams @ 2020-11-18 16:19 UTC (permalink / raw)
To: 43389
I'd be happy to run my Emacs with debugging to try and troubleshoot
this memory leak since it has happened twice to me. I can't yet
consistently reproduce it though. I think it's related to helm,
org-caldav, or slime, combined with running in daemon mode.
Can someone summarize what debug options I should run with, recompile
with, etc to provide proper information for next time? I'd like to be
able to make an effective report when it next occurs.
On Wed, Nov 18, 2020 at 12:01:39PM +0300, Jean Louis wrote:
> * Andreas Schwab <schwab@linux-m68k.org> [2020-11-18 11:32]:
> > On Nov 18 2020, Jean Louis wrote:
> >
> > > Is it recommended to collect strace with this below?
> > >
> > > strace emacs > output 2>&1
> >
> > It is preferable to use the -o option to decouple the strace output from
> > the inferior output.
>
> Thank you, I had seen that among the options, and right now I am
> running it with:
>
> #!/bin/bash
> unset CDPATH
> # ulimit -m 3145728
> #export MALLOC_ARENA_MAX=4
> date >> /home/data1/protected/tmp/emacs-debug
> strace -o emacs.strace emacs >> /home/data1/protected/tmp/emacs-debug 2>&1
>
>
>
------------------------------------------------------------------
Russell Adams RLAdams@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 16:19 ` Russell Adams
@ 2020-11-18 17:30 ` Eli Zaretskii
0 siblings, 0 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-18 17:30 UTC (permalink / raw)
To: Russell Adams; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos
> Date: Wed, 18 Nov 2020 17:19:21 +0100
> From: Russell Adams <RLAdams@AdamsInfoServ.Com>
>
> I'd be happy to run my Emacs with debugging to try and troubleshoot
> this memory leak since it has happened twice to me. I can't yet
> consistently reproduce it though. I think it's related to helm,
> org-caldav, or slime, combined with running in daemon mode.
>
> Can someone summarize what debug options I should run with, recompile
> with, etc to provide proper information for next time? I'd like to be
> able to make an effective report when it next occurs.
If you mean debug options for compiling Emacs, I don't think it
matters.
I suggest to try the tools pointed out here:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
and when the issue happens, collect the data and ask here where and
how to upload it for analysis.
Thanks.
P.S. Please CC the other people I added to the CC line, as I don't
think they are subscribed to the bug list, and it is important for us
to keep them in the loop, so they could help us investigate this.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 8:32 ` Andreas Schwab
2020-11-18 9:01 ` Jean Louis
@ 2020-11-19 15:57 ` Carlos O'Donell
1 sibling, 0 replies; 110+ messages in thread
From: Carlos O'Donell @ 2020-11-19 15:57 UTC (permalink / raw)
To: Andreas Schwab, Jean Louis; +Cc: Florian Weimer, 43389, dj
On 11/18/20 3:32 AM, Andreas Schwab wrote:
> On Nov 18 2020, Jean Louis wrote:
>
>> Is it recommended to collect strace with this below?
>>
>> strace emacs > output 2>&1
>
> It is preferable to use the -o option to decouple the strace output from
> the inferior output.
strace -ttt -ff -o NAME.logs BINARY
Gives timing, and follows forks to see what children are being run.
--
Cheers,
Carlos.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 5:43 ` Carlos O'Donell
2020-11-18 6:09 ` Jean Louis
@ 2020-11-18 18:01 ` Eli Zaretskii
2020-11-18 18:27 ` DJ Delorie
1 sibling, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-18 18:01 UTC (permalink / raw)
To: Carlos O'Donell; +Cc: fweimer, 43389, dj
> Cc: dj@redhat.com, 43389@debbugs.gnu.org
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Wed, 18 Nov 2020 00:43:55 -0500
>
> >> (let ((size 0))
> >> (dolist (buffer (buffer-list) size)
> >> (setq size (+ size (buffer-size buffer)))))
> >> ⇒ 98249826
> >>
> >> So it's not a small number, but still far away from those 800 MiB.
> >
> > Yes. I have a very similar value: 94642916 (in 376 buffers; you have
> > more than 1000). This is in a session that runs for 17 days and whose
> > VM size is 615 MB: a "normal" size for a long-living session, nowhere
> > near 2GB, let alone 11GB someone reported.
>
> If you get us a data trace I will run it through the simulator and produce
> a report that includes graphs explaining the results of the trace and
> we'll see if a smoking gun shows up.
If you asked Florian, then I agree that his data could be useful. If
you were asking me, then my data is not useful, because the footprint
is reasonable and never goes up to gigabyte range.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 18:01 ` Eli Zaretskii
@ 2020-11-18 18:27 ` DJ Delorie
2020-11-19 16:08 ` Carlos O'Donell
0 siblings, 1 reply; 110+ messages in thread
From: DJ Delorie @ 2020-11-18 18:27 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, fweimer, 43389
Eli Zaretskii <eliz@gnu.org> writes:
> If you asked Florian, then I agree that his data could be useful. If
> you were asking me, then my data is not useful, because the footprint
> is reasonable and never goes up to gigabyte range.
Yeah, the hard part here is capturing the actual problem. I'm running
the latest Emacs too but haven't seen the growth. Traces tend to be
more useful when the problem is reproducible in situ but really hard to
reproduce in a test environment.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 18:27 ` DJ Delorie
@ 2020-11-19 16:08 ` Carlos O'Donell
2020-11-22 20:19 ` Deus Max
0 siblings, 1 reply; 110+ messages in thread
From: Carlos O'Donell @ 2020-11-19 16:08 UTC (permalink / raw)
To: DJ Delorie, Eli Zaretskii; +Cc: fweimer, 43389
On 11/18/20 1:27 PM, DJ Delorie wrote:
> Eli Zaretskii <eliz@gnu.org> writes:
>> If you asked Florian, then I agree that his data could be useful. If
>> you were asking me, then my data is not useful, because the footprint
>> is reasonable and never goes up to gigabyte range.
>
> Yeah, the hard part here is capturing the actual problem. I'm running
> the latest Emacs too but haven't seen the growth. Traces tend to be
> more useful when the problem is reproducible in situ but really hard to
> reproduce in a test environment.
My commitment is this: If anyone can reproduce the problem with the tracer
enabled then I will analyze the trace and produce a report for the person
submitting the trace.
The report will include some graphs, and an analysis of the API calls and
the resulting RSS usage.
I've written several of these reports, but so far they haven't been all
that satisfying to read. We rarely find an easily discoverable root cause.
We probably need better information on the exact lifetimes of the
allocations.
For example, I recently added a "caller" frame trace which uses the
DWARF unwinder to find the caller and record that data. It's expensive
and only enabled if requested. This is often useful in determining who
made the API request (it requires tracing through 2 frames at a
minimum). The performance loss may make the bug go away, though, and so
that should be considered.
--
Cheers,
Carlos.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-19 16:08 ` Carlos O'Donell
@ 2020-11-22 20:19 ` Deus Max
2020-11-23 3:26 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Deus Max @ 2020-11-22 20:19 UTC (permalink / raw)
To: Carlos O'Donell; +Cc: fweimer, 43389, DJ Delorie
On Thu, Nov 19 2020, Carlos O'Donell wrote:
> On 11/18/20 1:27 PM, DJ Delorie wrote:
>> Eli Zaretskii <eliz@gnu.org> writes:
>>> If you asked Florian, then I agree that his data could be useful. If
>>> you were asking me, then my data is not useful, because the footprint
>>> is reasonable and never goes up to gigabyte range.
>>
>> Yeah, the hard part here is capturing the actual problem. I'm running
>> the latest Emacs too but haven't seen the growth. Traces tend to be
>> more useful when the problem is reproducible in situ but really hard to
>> reproduce in a test environment.
>
> My commitment is this: If anyone can reproduce the problem with the tracer
> enabled then I will analyze the trace and produce a report for the person
> submitting the trace.
>
My Emacs has been experiencing leaks and crashes very often, both at
home and at work. This is very annoying: I can hear the fan suddenly
spinning up, or the keys stop responding... and the "oh-oh, here we go
again" feeling comes back.
If it is easy to provide instructions/recommendations on how to run
Emacs for producing a useful trace report, I will be happy to do so.
Even to recompile as needed.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-22 20:19 ` Deus Max
@ 2020-11-23 3:26 ` Eli Zaretskii
2020-11-23 16:45 ` Deus Max
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-23 3:26 UTC (permalink / raw)
To: Dias Badekas; +Cc: carlos, fweimer, dj, 43389
> From: Deus Max <deusmax@gmx.com>
> Cc: DJ Delorie <dj@redhat.com>, Eli Zaretskii <eliz@gnu.org>,
> fweimer@redhat.com, 43389@debbugs.gnu.org
> Date: Sun, 22 Nov 2020 22:19:29 +0200
>
> If it is easy to provide instructions/recommendations on how to run
> Emacs for producing a useful trace report, I will be happy to do so.
> Even to recompile as needed.
Carlos provided a pointer to the tracing tools, see
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
There are some instructions there; if something is not clear enough, I
suggest to ask specific questions here.
Thanks.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-23 3:26 ` Eli Zaretskii
@ 2020-11-23 16:45 ` Deus Max
2020-11-23 17:07 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Deus Max @ 2020-11-23 16:45 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, fweimer, dj, 43389
On Mon, Nov 23 2020, Eli Zaretskii wrote:
>> From: Deus Max <deusmax@gmx.com>
>> Cc: DJ Delorie <dj@redhat.com>, Eli Zaretskii <eliz@gnu.org>,
>> fweimer@redhat.com, 43389@debbugs.gnu.org
>> Date: Sun, 22 Nov 2020 22:19:29 +0200
>>
>> If it is easy to provide instructions/recommendations on how to run
>> Emacs for producing a useful trace report, I will be happy to do so.
>> Even to recompile as needed.
>
> Carlos provided a pointer to the tracing tools, see
>
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
>
> There are some instructions there; if something is not clear enough, I
> suggest to ask specific questions here.
>
> Thanks.
Will read and try it out.
Thank you.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 15:45 ` Eli Zaretskii
2020-11-17 16:32 ` Carlos O'Donell
@ 2020-11-17 16:33 ` Florian Weimer
2020-11-17 17:08 ` Eli Zaretskii
1 sibling, 1 reply; 110+ messages in thread
From: Florian Weimer @ 2020-11-17 16:33 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: carlos, 43389, dj
* Eli Zaretskii:
>> There is an issue with reusing posix_memalign allocations. On my system
>> (running Emacs 27.1 as supplied by Fedora 32), I only see such
>> allocations as the backing storage for the glib (sic) slab allocator.
>
> (By "backing storage" you mean malloc calls that request large chunks
> so that malloc obtains the memory from mmap? Or do you mean something
> else?)
Larger chunks that are split up by the glib allocator. Whether they are
allocated by mmap is unclear.
> Are the problems with posix_memalign also relevant to calls to
> aligned_alloc? Emacs calls the latter _a_lot_, see lisp_align_malloc.
Ahh. I don't see many such calls, even during heavy Gnus usage. But
opening really large groups triggers such calls.
aligned_alloc is equally problematic. I don't know if the Emacs
allocation pattern triggers the pathological behavior.
I seem to suffer from the problem as well. glibc malloc currently maintains
more than 200 MiB of unused memory:
<size from="1065345" to="153025249" total="226688532" count="20"/>
<total type="fast" count="0" size="0"/>
<total type="rest" count="3802" size="238948201"/>
Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.
It's possible to generate such statistics using GDB, by calling the
malloc_info function.
My Emacs process does not look like it suffered from the aligned_alloc
issue. It would leave behind many smaller, unused allocations, not such
large ones.
>> It gets exercised mostly when creating UI elements, as far as I can
>> tell.
>
> I guess your build uses GTK as the toolkit?
I think so:
GNU Emacs 27.1 (build 1, x86_64-redhat-linux-gnu, GTK+ Version
3.24.21, cairo version 1.16.0) of 2020-08-20
>> The other issue we have is that thread counts has exceeded in recent
>> times more than system memory, and glibc basically scales RSS overhead
>> with thread count, not memory. A use of libgomp suggests that many
>> threads might indeed be spawned. If their lifetimes overlap, it would
>> not be unheard of to end up with some RSS overhead in the order of
>> peak-usage-per-thread times 8 times the number of hardware threads
>> supported by the system. Setting MALLOC_ARENA_MAX to a small value
>> counteracts that, so it's very simple to experiment with it if you have
>> a working reproducer.
>
> "Small value" being something like 2?
Yes, that would be a good start. But my Emacs process isn't affected by
this, so this setting wouldn't help there.
Thanks,
Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 16:33 ` Florian Weimer
@ 2020-11-17 17:08 ` Eli Zaretskii
2020-11-17 17:24 ` Florian Weimer
2020-11-17 20:39 ` Jean Louis
0 siblings, 2 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 17:08 UTC (permalink / raw)
To: Florian Weimer, Trevor Bentley; +Cc: carlos, 43389, dj
> From: Florian Weimer <fweimer@redhat.com>
> Cc: carlos@redhat.com, dj@redhat.com, 43389@debbugs.gnu.org
> Date: Tue, 17 Nov 2020 17:33:13 +0100
>
> <size from="1065345" to="153025249" total="226688532" count="20"/>
>
> <total type="fast" count="0" size="0"/>
> <total type="rest" count="3802" size="238948201"/>
>
> Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.
Yes, I wouldn't expect to see such a large footprint. How long is
this session running? (You can use "M-x emacs-uptime" to answer
that.)
> It's possible to generate such statistics using GDB, by calling the
> malloc_info function.
Emacs 28 (from the master branch) has recently acquired the
malloc-info command which will emit this to stderr. You can see one
example of its output here:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=44666#5
which doesn't seem to show any significant amounts of free memory at
all?
I encourage all the people who reported similar problems to try the
measures mentioned by Florian and Carlos, including malloc-info, and
report the results.
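For long-running sessions, a minimal sketch (assuming an Emacs built
from the master branch, where malloc-info exists, and started with
stderr redirected to a file as in the wrapper scripts elsewhere in this
thread) that records the allocator state periodically:
;; Dump malloc statistics to stderr immediately and then once an hour;
;; the output lands wherever the Emacs process's stderr was redirected.
(when (fboundp 'malloc-info)
  (run-with-timer 0 3600 #'malloc-info))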
> My Emacs process does not look like it suffered from the aligned_alloc
> issue. It would leave behind many smaller, unused allocations, not such
> large ones.
> [...]
> >> supported by the system. Setting MALLOC_ARENA_MAX to a small value
> >> counteracts that, so it's very simple to experiment with it if you have
> >> a working reproducer.
> >
> > "Small value" being something like 2?
>
> Yes, that would be a good start. But my Emacs process isn't affected by
> this, so this setting wouldn't help there.
So both known problems seem to be not an issue in your case. What
other reasons could cause that?
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 17:08 ` Eli Zaretskii
@ 2020-11-17 17:24 ` Florian Weimer
2020-11-17 20:39 ` Jean Louis
1 sibling, 0 replies; 110+ messages in thread
From: Florian Weimer @ 2020-11-17 17:24 UTC (permalink / raw)
To: Eli Zaretskii
Cc: 43389, Jean Louis, dj, michael_heerdegen, Trevor Bentley, carlos
* Eli Zaretskii:
>> From: Florian Weimer <fweimer@redhat.com>
>> Cc: carlos@redhat.com, dj@redhat.com, 43389@debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 17:33:13 +0100
>>
>> <size from="1065345" to="153025249" total="226688532" count="20"/>
>>
>> <total type="fast" count="0" size="0"/>
>> <total type="rest" count="3802" size="238948201"/>
>>
>> Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.
>
> Yes, I wouldn't expect to see such a large footprint. How long is
> this session running? (You can use "M-x emacs-uptime" to answer
> that.)
15 days.
>> It's possible to generate such statistics using GDB, by calling the
>> malloc_info function.
>
> Emacs 28 (from the master branch) has recently acquired the
> malloc-info command which will emit this to stderr. You can see one
> example of its output here:
>
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=44666#5
>
> which doesn't seem to show any significant amounts of free memory at
> all?
No, these values look suspiciously good.
But I seem to have this issue as well—with the 800 MiB that are actually
in use. The glibc malloc pathological behavior comes on top of that.
Is there something comparable to malloc-info to dump the Emacs allocator
freelists?
> So both known problems seem to be not an issue in your case. What
> other reasons could cause that?
Large allocations not getting forwarded to mmap, almost all of them
freed, but a late allocation remained. This prevents returning memory
from the main arena to the operating system.
Thanks,
Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 17:08 ` Eli Zaretskii
2020-11-17 17:24 ` Florian Weimer
@ 2020-11-17 20:39 ` Jean Louis
2020-11-17 20:57 ` DJ Delorie
1 sibling, 1 reply; 110+ messages in thread
From: Jean Louis @ 2020-11-17 20:39 UTC (permalink / raw)
To: Eli Zaretskii
Cc: Florian Weimer, 43389, Jean Louis, dj, michael_heerdegen,
Trevor Bentley, carlos
* Eli Zaretskii <eliz@gnu.org> [2020-11-17 20:09]:
> I encourage all the people who reported similar problems to try the
> measures mentioned by Florian and Carlos, including malloc-info, and
> report the results.
For now I am running with:
export MALLOC_ARENA_MAX=4
After a few days I will report more.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:39 ` Jean Louis
@ 2020-11-17 20:57 ` DJ Delorie
2020-11-17 21:45 ` Jean Louis
0 siblings, 1 reply; 110+ messages in thread
From: DJ Delorie @ 2020-11-17 20:57 UTC (permalink / raw)
To: Jean Louis; +Cc: fweimer, 43389, bugs, michael_heerdegen, trevor, carlos
Jean Louis <bugs@gnu.support> writes:
> After a few days I will report more.
Do we have any strong hints on things we (i.e. I) can do to cause this
to happen faster?
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 20:57 ` DJ Delorie
@ 2020-11-17 21:45 ` Jean Louis
2020-11-18 15:03 ` Eli Zaretskii
0 siblings, 1 reply; 110+ messages in thread
From: Jean Louis @ 2020-11-17 21:45 UTC (permalink / raw)
To: DJ Delorie; +Cc: fweimer, 43389, carlos, trevor, michael_heerdegen
* DJ Delorie <dj@redhat.com> [2020-11-17 23:57]:
> Jean Louis <bugs@gnu.support> writes:
> > After a few days I will report more.
>
> Do we have any strong hints on things we (i.e. I) can do to cause this
> to happen faster?
This is because I cannot know when it is happening. In general it was
taking place almost all the time under EXWM (Emacs X Window Manager),
so I switched to IceWM just to see whether EXWM was provoking the
problem. Under IceWM I have hit it 3 times, much less often than under
EXWM, and I do not see that I have changed my habits of using Emacs in
any way.
Today I had a session of more than 10 hours, and then what did I do? I
do not know exactly. I kept only XTerm and Emacs on X; at some point
Emacs started using swap, though it is unclear to me whether it was
really swapping or doing something else with the disk. Some minutes
before that I inspected it with htop and found Emacs using 9.7 GB of
memory. Later the system was unusable.
All I could see during that time was the hard disk LED turned on all
the time. I could do almost nothing; I could not interrupt Emacs or
switch to a console. Then I used Magic SysRq to do the necessary steps
to at least synchronize the hard disks, unmount, and reboot.
I am running it with this script:
#!/bin/bash
# CDPATH invokes bugs in eshell, not related to this
unset CDPATH
# I was trying to tune ulimit -m but it did not help
# ulimit -m 3145728
# I am trying this now
export MALLOC_ARENA_MAX=4
date >> /home/data1/protected/tmp/emacs-debug
# The redirection below also captures stderr, which is where M-x malloc-info writes its output
emacs >> /home/data1/protected/tmp/emacs-debug 2>&1
Maybe some simple automatic function could be temporarily included
that prints to the output what Emacs is doing when it starts swapping
(if it is swapping); such messages could then at least be captured in
a file even if I have to reboot the computer.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-17 21:45 ` Jean Louis
@ 2020-11-18 15:03 ` Eli Zaretskii
2020-11-23 18:55 ` Jean Louis
0 siblings, 1 reply; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-18 15:03 UTC (permalink / raw)
To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos
> Date: Wed, 18 Nov 2020 00:45:48 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: eliz@gnu.org, fweimer@redhat.com, trevor@trevorbentley.com,
> michael_heerdegen@web.de, carlos@redhat.com, 43389@debbugs.gnu.org
>
> Maybe some simple automatic function could be temporarily included
> that prints to the output what Emacs is doing when it starts swapping
> (if it is swapping); such messages could then at least be captured in
> a file even if I have to reboot the computer.
Emacs doesn't know when the system starts swapping. But you can write
a function that tracks the vsize of the Emacs process, using emacs-pid
and process-attributes, and displays some prominent message when the
vsize increments become larger than some threshold, or the vsize
itself becomes larger than some fixed number. Then run this function
off a timer that fires every 10 or 15 seconds, and wait for it to tell
you when the fun starts.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 15:03 ` Eli Zaretskii
@ 2020-11-23 18:55 ` Jean Louis
0 siblings, 0 replies; 110+ messages in thread
From: Jean Louis @ 2020-11-23 18:55 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos
* Eli Zaretskii <eliz@gnu.org> [2020-11-18 18:04]:
> > Date: Wed, 18 Nov 2020 00:45:48 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: eliz@gnu.org, fweimer@redhat.com, trevor@trevorbentley.com,
> > michael_heerdegen@web.de, carlos@redhat.com, 43389@debbugs.gnu.org
> >
> > Maybe some simple automatic function could be temporarily included
> > that prints to the output what Emacs is doing when it starts swapping
> > (if it is swapping); such messages could then at least be captured in
> > a file even if I have to reboot the computer.
I now use M-x vsize-with-timer with a 2 GB threshold, and M-x good-bye
to capture that basic data:
(defun vsize-value ()
  ;; Return the vsize entry of `process-attributes' as a (NAME VALUE)
  ;; list; note this assumes vsize is the sixth entry of the alist.
  (let* ((attributes (process-attributes (emacs-pid)))
         (vsize-name (car (elt attributes 5)))
         (vsize-value (cdr (elt attributes 5))))
    (list vsize-name vsize-value)))

(defun vsize-check (&optional gb)
  ;; Report when the Emacs vsize exceeds GB GiB (default 2).
  ;; `process-attributes' reports vsize in kilobytes, hence 1048576.0.
  (let* ((vsize (cadr (vsize-value)))
         (gb (or gb 2))
         (gb-1 1048576.0)
         (gb (* gb gb-1)))
    (when (> vsize gb)
      (message "vsize: %.02fG" (/ vsize gb-1)))))

(defun vsize-with-timer (gb)
  ;; Check the vsize every 30 seconds, starting one second from now.
  (interactive "nGiB: ")
  (let ((timer (run-with-timer 1 30 'vsize-check gb)))
    (message "Timer: %s" timer)))

(defun good-bye ()
  ;; Dump uptime, pid, GC statistics, total buffer size and vsize
  ;; into a per-session file.
  (interactive)
  (let* ((garbage (garbage-collect))
         (size 0)
         (buffers-size (dolist (buffer (buffer-list) size)
                         (setq size (+ size (buffer-size buffer)))))
         (uptime (emacs-uptime))
         (pid (emacs-pid))
         (vsize (vsize-value))
         (file (format "~/tmp/emacs-session-%s.el" pid))
         (list (list (list 'uptime uptime) (list 'pid pid)
                     (list 'garbage garbage) (list 'buffers-size buffers-size)
                     (list 'vsize vsize))))
    (with-temp-file file
      (insert (prin1-to-string list)))
    (message file)))
^ permalink raw reply [flat|nested] 110+ messages in thread
[parent not found: <87wnyju40z.fsf@mail.trevorbentley.com>]
* bug#43389: 28.0.50; Emacs memory leaks
[not found] ` <87wnyju40z.fsf@mail.trevorbentley.com>
@ 2020-11-17 20:36 ` Eli Zaretskii
0 siblings, 0 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-17 20:36 UTC (permalink / raw)
To: Trevor Bentley; +Cc: 43389
[Please use Reply All to keep the bug tracker on the CC list.]
> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc:
> Date: Tue, 17 Nov 2020 21:22:52 +0100
>
> > . something called "gomp_thread_start" is called, and also allocates
> > a lot of memory -- does this mean additional threads start running?
> >
> > Or am I reading the graphs incorrectly?
>
> You are right that they are present, but that path isn't
> responsible for a significant percentage of the total memory usage
> at the end. Doesn't look like gomp_thread_start is in the
> bottom-most snapshot at all. It was reporting ~100MB allocated by
> gomp_thread_start, out of 4GB. And those are related to images,
> so 100MB is perhaps reasonable.
AFAIK, glibc's malloc allocates a new heap arena for each thread that
calls malloc. The arena is large, so having many threads could
enlarge the footprint by a lot. That's why Florian suggested setting
MALLOC_ARENA_MAX to a small value, to keep this path of footprint
growth in check.
> However, I'm now a bit suspicious of these log buffers. Last time
> the usage spiked I had 15MB of reported buffers, and I was
> watching the process RSS increase by 1MB every 5 seconds in top,
> like clockwork. I killed all of the large log buffers (3MB
> each), and RSS stopped noticeably increasing. Not sure if that
> _stopped_ the leak, or only slowed it down to beneath the
> threshold top could show me. Either way, it shouldn't need 1.5GB of
> RAM to track 15MB of text.
Unless malloc somehow allocates buffer memory via sbrk and not mmap,
buffers shouldn't be part of the footprint growth issue, because any
mmap'ed memory can be munmap'ed without any restrictions, and returns
to the OS. And you can see how much buffer memory you have by
watching the statistics returned by garbage-collect.
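For example, a quick sketch that puts the two numbers side by side:
;; Compare the `buffers' entry of `garbage-collect' (buffer objects,
;; excluding text) with the total size of buffer text.
(let ((objects (assq 'buffers (garbage-collect)))
      (text (apply #'+ (mapcar #'buffer-size (buffer-list)))))
  (message "buffer objects: %S, total buffer text: %d bytes" objects text))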
> gomp_thread_start appears to be triggered when images are
> displayed.
Yes, I believe ImageMagick starts them to scale images.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-14 0:43 bug#43389: 28.0.50; Emacs memory leaks Michael Heerdegen
` (2 preceding siblings ...)
2020-10-29 20:17 ` Trevor Bentley
@ 2020-11-18 21:47 ` Jose A. Ortega Ruiz
2020-11-19 14:03 ` Eli Zaretskii
2020-12-09 19:41 ` Jose A. Ortega Ruiz
4 siblings, 1 reply; 110+ messages in thread
From: Jose A. Ortega Ruiz @ 2020-11-18 21:47 UTC (permalink / raw)
To: 43389
On Tue, Nov 17 2020, Eli Zaretskii wrote:
>> From: Florian Weimer <fweimer@redhat.com>
>> Cc: dj@redhat.com, carlos@redhat.com, 43389@debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 21:58:57 +0100
>>
>> (let ((size 0))
>> (dolist (buffer (buffer-list) size)
>> (setq size (+ size (buffer-size buffer)))))
>> ⇒ 98249826
>>
>> So it's not a small number, but still far away from those 800 MiB.
>
> Yes. I have a very similar value: 94642916 (in 376 buffers; you have
> more than 1000). This is in a session that runs for 17 days and whose
> VM size is 615 MB: a "normal" size for a long-living session, nowhere
> near 2GB, let alone 11GB someone reported.
As an additional datapoint, since version 27 (i usually compile from
master, so also before its release), i'm experiencing bigger RAM
consumption from my emacs processes too.
It used to always be way below 1Gb, and at some point (i have the
impression it was with the switch to pdumper), typical footprints went
up to ~2Gb.
In my case, there seems to be a jump in RAM footprint every now and then
(i get to ~1.5Gb in a day almost for sure, and 1.8Gb is not rare at
all), but they're not systematic.
Everything starts "normal" (300Mb), then i open Gnus an it grows a bit
after reading some groups (500Mb, say), and so on, and be there for a
while even if i keep using Gnus for reading similarly sized message
groups. But, at some point, quite suddenly, i see RAM going to ~1Gb,
without any obvious change in the libraries i've loaded or in my usage
of them. The pattern repeats until i find myself with ~2Gb in N days,
with N varying from 1 to 3.
It's difficult for me to be more precise because i use emacs for
absolutely everything. But, perhaps tellingly, i don't use most of the
packages that have been mentioned in this thread (in my case it's ivy
instead of helm, i use pdf-tools and that has a considerable footprint,
but i see jumps without having it loaded too, similar thing for
emacs-w3m), and i see the jumps appear so consistently that my
impression is that they're not directly caused by a single package.
The only coincidence i've seen is that i use EXWM too (btw, that's a
window manager implemented in ELisp that makes emacs itself the window
manager, calling the X11 API directly through FFI), but other people are
having problems without it.
I've also tried with emacs compiled with and without GTK (i usually
compile without any toolkit at all) and with and without ImageMagick,
and the increased footprint is the same in all those combinations. I
cannot see any difference either between the released 27.1 and 28.0.50
regularly compiled from master: both seem to misbehave in the same way.
As i mentioned above, i've got a hunch that this all started, at least
for me, with pdumper, but i must say that is most probably a red
herring.
I hope this helps a bit, despite its vagueness.
Cheers,
jao
P.S.: I'm not copying the external glibc developers in this response
because i think most of the above only makes sense to emacs developers;
please let me know if you'd rather i did copy them.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-18 21:47 ` Jose A. Ortega Ruiz
@ 2020-11-19 14:03 ` Eli Zaretskii
2020-11-19 14:34 ` Jean Louis
2020-11-19 17:25 ` jao
0 siblings, 2 replies; 110+ messages in thread
From: Eli Zaretskii @ 2020-11-19 14:03 UTC (permalink / raw)
To: Jose A. Ortega Ruiz; +Cc: 43389, carlos, fweimer, dj
> From: "Jose A. Ortega Ruiz" <jao@gnu.org>
> Date: Wed, 18 Nov 2020 21:47:30 +0000
>
> As an additional datapoint, since version 27 (i usually compile from
> master, so also before its release), i'm experiencing bigger RAM
> consumption from my emacs processes too.
>
> It used to always be way below 1Gb, and at some point (i have the
> impression it was with the switch to pdumper), typical footprints went
> up to ~2Gb.
>
> In my case, there seems to be a jump in RAM footprint every now and then
> (i get to ~1.5Gb in a day almost for sure, and 1.8Gb is not rare at
> all), but they're not systematic.
>
> Everything starts "normal" (300Mb), then i open Gnus an it grows a bit
> after reading some groups (500Mb, say), and so on, and be there for a
> while even if i keep using Gnus for reading similarly sized message
> groups. But, at some point, quite suddenly, i see RAM going to ~1Gb,
> without any obvious change in the libraries i've loaded or in my usage
> of them. The pattern repeats until i find myself with ~2Gb in N days,
> with N varying from 1 to 3.
>
> It's difficult for me to be more precise because i use emacs for
> absolutely everything. But, perhaps tellingly, i don't use most of the
> packages that have been mentioned in this thread (in my case it's ivy
> instead of helm, i use pdf-tools and that has a considerable footprint,
> but i see jumps without having it loaded too, similar thing for
> emacs-w3m), and i see the jumps appear so consistently that my
> impression is that they're not directly caused by a single package.
Thanks. If you can afford it, would you please try using the malloc
tracing tools pointed to here:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
and then tell us where we could get the data you collected?
> As i mentioned above, i've got a hunch that this all started, at least
> for me, with pdumper, but i must say that is most probably a red
> herring.
For the record, can you please tell us what flavor and version of
GNU/Linux you are using?
> P.S.: I'm not copying the external glibc developers in this response
> because i think most of the above only makes sense to emacs developers;
> please let me know if you'd rather i did copy them.
I've added them. Please CC them in the future; it is important for us
that the glibc experts see the data points people report in this
matter.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-19 14:03 ` Eli Zaretskii
@ 2020-11-19 14:34 ` Jean Louis
2020-11-19 16:03 ` Carlos O'Donell
2020-11-19 17:25 ` jao
1 sibling, 1 reply; 110+ messages in thread
From: Jean Louis @ 2020-11-19 14:34 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, carlos, fweimer, dj, Jose A. Ortega Ruiz
* Eli Zaretskii <eliz@gnu.org> [2020-11-19 17:05]:
> Thanks. If you can afford it, would you please try using the malloc
> tracing tools pointed to here:
>
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
I have built it. A slight problem is that I do not get the output that
the documentation says I should get, something like this:
mtrace: writing to /tmp/mtrace.mtr.706
I do not see it here:
LD_PRELOAD=./libmtrace.so ls
block_size_rss.c INSTALL mtrace.c trace2wl.c trace_hist.sh
config.log libmtrace.so mtrace.h trace_allocs trace_plot.m
config.status LICENSES README.md trace_allocs.c trace_run
configure MAINTAINERS sample.c trace_analysis.sh trace_run.c
configure.ac Makefile statistics.c trace_block_size_rss trace_sample
COPYING Makefile.in tests trace_dump trace_statistics
COPYING.LIB malloc.h trace2wl trace_dump.c util.h
But I did get something in /tmp/mtrace.mtr.XXX
So I will run Emacs that way.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-19 14:34 ` Jean Louis
@ 2020-11-19 16:03 ` Carlos O'Donell
0 siblings, 0 replies; 110+ messages in thread
From: Carlos O'Donell @ 2020-11-19 16:03 UTC (permalink / raw)
To: Jean Louis, Eli Zaretskii; +Cc: 43389, fweimer, Jose A. Ortega Ruiz, dj
On 11/19/20 9:34 AM, Jean Louis wrote:
> * Eli Zaretskii <eliz@gnu.org> [2020-11-19 17:05]:
>> Thanks. If you can afford it, would you please try using the malloc
>> tracing tools pointed to here:
>>
>> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
>
> I have built it. A slight problem is that I do not get the output that
> the documentation says I should get, something like this:
>
> mtrace: writing to /tmp/mtrace.mtr.706
This was changed recently in commit 4594db1defd40289192a0ea641c50278277f1737
because output to stdout interferes with the application output so it is
disabled by default. The docs show that MTRACE_CTL_FILE will dictate
where the trace is written to and that MTRACE_CTL_VERBOSE will output
verbose information to stdout.
I've pushed a doc update to indicate that in the example.
> I do not see it here:
>
> LD_PRELOAD=./libmtrace.so ls
> block_size_rss.c INSTALL mtrace.c trace2wl.c trace_hist.sh
> config.log libmtrace.so mtrace.h trace_allocs trace_plot.m
> config.status LICENSES README.md trace_allocs.c trace_run
> configure MAINTAINERS sample.c trace_analysis.sh trace_run.c
> configure.ac Makefile statistics.c trace_block_size_rss trace_sample
> COPYING Makefile.in tests trace_dump trace_statistics
> COPYING.LIB malloc.h trace2wl trace_dump.c util.h
>
> But I did get something in /tmp/mtrace.mtr.XXX
>
> So I will run Emacs that way.
That should work.
--
Cheers,
Carlos.
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-11-19 14:03 ` Eli Zaretskii
2020-11-19 14:34 ` Jean Louis
@ 2020-11-19 17:25 ` jao
1 sibling, 0 replies; 110+ messages in thread
From: jao @ 2020-11-19 17:25 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: 43389, carlos, fweimer, dj
On Thu, Nov 19 2020, Eli Zaretskii wrote:
[...]
> Thanks. If you can afford it, would you please try using the malloc
> tracing tools pointed to here:
>
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
>
> and then tell us where we could get the data you collected?
i'll see what i can do, yes (possibly over the weekend).
>> As i mentioned above, i've got a hunch that this all started, at least
>> for me, with pdumper, but i must say that is most probably a red
>> herring.
>
> For the record, can you please tell us what flavor and version of
> GNU/Linux you are using?
Debian sid.
Cheers,
jao
--
If you could kick in the pants the person responsible for most of your
trouble, you wouldn't sit for a month. — Theodore Roosevelt
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-09-14 0:43 bug#43389: 28.0.50; Emacs memory leaks Michael Heerdegen
` (3 preceding siblings ...)
2020-11-18 21:47 ` Jose A. Ortega Ruiz
@ 2020-12-09 19:41 ` Jose A. Ortega Ruiz
2020-12-09 20:25 ` Lars Ingebrigtsen
4 siblings, 1 reply; 110+ messages in thread
From: Jose A. Ortega Ruiz @ 2020-12-09 19:41 UTC (permalink / raw)
To: 43389
On Tue, Dec 08 2020, Russell Adams wrote:
> On Tue, Dec 08, 2020 at 03:24:27AM +0000, Jose A. Ortega Ruiz wrote:
>> On Tue, Dec 08 2020, Michael Heerdegen wrote:
>>
>> > shut it down normally). I'm sure that at least a significant part of
>> > the problem materialized while using (more or less only) Gnus.
>>
>> I also have anecdotal evidence of that. Quite systematically, i start
>> emacs, things load, i'm around 300Mb of RAM, quite stable. Then i start
>> Gnus, read some groups, and, very soon after that, while emacs is
>> basically idle, i can see RAM increasing by ~10Mb every ~10secs until it
>> reaches something like 800-900Mb.
>
> I have consistently encountered this memory leak without a clear path
> to reproducing it other than regular use over time, and I don't use
> Gnus. I read mail in Mutt in another terminal window.
>
> Thus I'm not sure Gnus is the culprit.
Neither am i :) Actually, i just observed the pattern above (RAM going
up by 1Mb/sec bringing total memory from 300Mb to 800Mb, then stopping)
before starting Gnus. So i guess that, if Gnus plays any role, it must
be indirect.
jao
--
I don't necessarily agree with everything I say.
-Marshall McLuhan (1911-1980)
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-09 19:41 ` Jose A. Ortega Ruiz
@ 2020-12-09 20:25 ` Lars Ingebrigtsen
2020-12-09 21:04 ` Jose A. Ortega Ruiz
0 siblings, 1 reply; 110+ messages in thread
From: Lars Ingebrigtsen @ 2020-12-09 20:25 UTC (permalink / raw)
To: Jose A. Ortega Ruiz; +Cc: 43389
"Jose A. Ortega Ruiz" <jao@gnu.org> writes:
> Neither am i :) Actually, i just observed the pattern above (RAM going
> up by 1Mb/sec bringing total memory from 300Mb to 800Mb, then stopping)
> before starting Gnus. So i guess that, if Gnus plays any role, it must
> be indirect.
I haven't been following this thread closely, but it strikes me as
puzzling that there are a lot of people seeing these leaks -- and there
are also many people (like me) who don't see these leaks at all. (And I
have Emacsen running for weeks on end, doing all sorts of odd stuff.)
Has anybody tried compiling a list of features people who see the leaks
are using? Not that there's really any good way of gathering that data,
but ... Like, helm is known for using lots of memory, and eww can, too,
under some circumstances, and so can image caching...
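As one possible starting point, a small sketch (the file name is just
an example) that each affected session could use to dump its loaded
features for comparison:
;; Write the sorted list of loaded features to a file, so that reports
;; from leaking and non-leaking sessions can be compared.
(with-temp-file "~/emacs-features.txt"
  (dolist (f (sort (mapcar #'symbol-name features) #'string<))
    (insert f "\n")))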
--
(domestic pets only, the antidote for overdose, milk.)
bloggy blog: http://lars.ingebrigtsen.no
^ permalink raw reply [flat|nested] 110+ messages in thread
* bug#43389: 28.0.50; Emacs memory leaks
2020-12-09 20:25 ` Lars Ingebrigtsen
@ 2020-12-09 21:04 ` Jose A. Ortega Ruiz
2020-12-11 13:55 ` Lars Ingebrigtsen
0 siblings, 1 reply; 110+ messages in thread
From: Jose A. Ortega Ruiz @ 2020-12-09 21:04 UTC (permalink / raw)
To: Lars Ingebrigtsen; +Cc: 43389
On Wed, Dec 09 2020, Lars Ingebrigtsen wrote:
[...]
> Has anybody tried compiling a list of features people who see the leaks
> are using? Not that there's really any good way of gathering that data,
> but ... Like, helm is known for using lots of memory, and eww can, too,
> under some circumstances, and so can image caching...
in my case, it's ivy and emacs-w3m. the first burst i observe is
usually at the beginning, so not many of the myriad other packages i use
have been active at all. i use exwm, so that's one that's always there
for sure, and ivy takes control immediately, but little else seems
"needed".
regarding images, i use pdf-tools, and it has a heavy memory footprint
(opening any PDF easily increases emacs RAM consumption by 200Mb, no
matter how big the PDF). but those jumps are immediate upon opening the
doc.
in my case, another source of puzzlement is this "bursty" behaviour.
after the first one, i can be at ~1Gb for a day or two (doing almost
everything inside emacs, so all kinds of packages used), and then,
without any change in my usage patterns that i could tell, a new burst
will take my RAM, 10Mbs at a time, up to ~2Gb. and then stop, again
without me doing, consciously, anything differently.
jao
--
To see ourselves as others see us is a most salutary gift. Hardly less
important is the capacity to see others as they see themselves.
-Aldous Huxley, novelist (1894-1963)
^ permalink raw reply [flat|nested] 110+ messages in thread