all messages for Emacs-related lists mirrored at yhetil.org
* bug#43395: 28.0.50; memory leak
@ 2020-09-12  2:12 ` Madhu
  2020-09-14 15:08 ` Eli Zaretskii
       [not found] ` <handler.43395.D43389.161115724232582.notifdone@debbugs.gnu.org>
  0 siblings, 2 replies; 166+ messages in thread
From: Madhu @ 2020-09-12  2:12 UTC (permalink / raw)
  To: 43395


Following up on the thread:
https://lists.gnu.org/archive/html/help-gnu-emacs/2020-09/msg00147.html

There appears to be a memory leak, with the Emacs RSS growing
inordinately in size.

$ ps o pid,rss,drs,sz,share,start_time,vsize,cmd 26285
  PID   RSS   DRS  SIZE - START    VSZ CMD
26285 2643236 2996379 2664940 - Sep09 2998948 /7/gtk/emacs/build-xt-xft/src/emacs --debug-init --daemon

I usually only notice the leak when it has gone beyond 2G - when linux
refuses to suspend because I have limited swap.  In most cases emacs
would be running for a few days.

The values reported by garbage-collect do not account for the 2GB
allocated by Emacs.

Advice on tooling would be welcome: how to instrument Emacs, monitor
the system for memory changes, and flag the point at which the leak
occurs.


In GNU Emacs 28.0.50 (build 1, x86_64-pc-linux-gnu, X toolkit, Xaw3d scroll bars)
 of 2020-09-06 built on maher
Emacs Repository revision: 6fc502c1ef327ab357c971b9bffbbd7cb6a436f1
Repository branch: madhu-tip
Windowing system distributor 'The X.Org Foundation', version 11.0.12008000
System Description: Gentoo/Linux

Configured using:
 'configure -C --with-harfbuzz --without-cairo --with-x-toolkit=athena
 --with-xft'

Configured features:
XAW3D XPM JPEG TIFF GIF PNG RSVG SOUND GPM DBUS GSETTINGS GLIB NOTIFY
INOTIFY ACL LIBSELINUX GNUTLS LIBXML2 FREETYPE HARFBUZZ M17N_FLT LIBOTF
XFT ZLIB TOOLKIT_SCROLL_BARS LUCID X11 XDBE XIM MODULES THREADS JSON
PDUMPER LCMS2

Important settings:
  value of $LC_COLLATE: C
  value of $LANG: en_US.utf8
  locale-coding-system: utf-8-unix

Major mode: Fundamental

Minor modes in effect:
  global-log4sly-mode: t
  global-magit-file-mode: t
  global-git-commit-mode: t
  async-bytecomp-package-mode: t
  other-frame-window-mode: t
  savehist-mode: t
  xclip-mode: t
  dired-single-mode: t
  save-place-mode: t
  recentf-mode: t
  show-paren-mode: t
  shell-dirtrack-mode: t
  minibuffer-depth-indicate-mode: t
  display-time-mode: t
  which-function-mode: t
  foomadhu-clear-output-mode: t
  foomadhu-translate-kbd-paren-mode: t
  new-shell-activate-mode: t
  foomadhu-mode: t
  ivy-prescient-mode: t
  prescient-persist-mode: t
  ivy-mode: t
  tooltip-mode: t
  mouse-wheel-mode: t
  file-name-shadow-mode: t
  global-font-lock-mode: t
  auto-composition-mode: t
  auto-encryption-mode: t
  auto-compression-mode: t
  line-number-mode: t
  transient-mark-mode: t

Load-path shadows:

Features:
(shadow emacsbug sendmail proced meson-mode yaml-mode idlwave
idlwave-help idlw-help desktop frameset mhtml-mode tex-mode latexenc
net-utils url-file url-dired vc-filewise cal-china lunar cal-bahai
cal-islam cal-hebrew holidays hol-loaddefs markdown-mode eieio-opt
speedbar ezimage dframe nndir tabify man wdired log-view log4sly nnagent
nnml mule-util ibuf-ext ibuffer ibuffer-loaddefs cal-julian solar
cal-dst conf-mode cl-indent dabbrev ielm html5-schema css-mode eww
url-queue mm-url scroll-lock rng-xsd xsd-regexp rng-cmpct python js
cc-mode cc-fonts cc-guess cc-menus cc-cmds cc-styles cc-align cc-engine
cc-vars cc-defs vc-rcs vc vc-dispatcher bug-reference make-mode
magit-bookmark magit-imenu git-rebase magit-extras magit-gitignore
magit-ediff ediff ediff-merg ediff-mult ediff-wind ediff-diff ediff-help
ediff-init ediff-util magit-subtree magit-patch magit-submodule
magit-obsolete magit-popup magit-blame magit-stash magit-reflog
magit-bisect magit-push magit-pull magit-fetch magit-clone magit-remote
magit-commit magit-sequence magit-notes magit-worktree magit-tag
magit-merge magit-branch magit-reset magit-files magit-refs magit-status
magit magit-repos magit-apply magit-wip magit-log magit-diff magit-core
magit-autorevert autorevert filenotify magit-margin magit-transient
magit-process magit-mode git-commit transient magit-git magit-section
magit-utils crm log-edit pcvs-util with-editor async-bytecomp async dash
vc-git sly-mk-defsystem grep sly-undefmethod sly-fancy sly-tramp
sly-stickers pulse hi-lock sly-trace-dialog sly-fontifying-fu
sly-package-fu sly-scratch sly-fancy-trace sly-fancy-inspector sly-mrepl
sly-autodoc sly-parse warnings sly-c-p-c sly-retro sly gud
sly-completion sly-buttons sly-messages sly-common apropos arc-mode
archive-mode hyperspec ebuild-mode skeleton sh-script smie executable
two-column iso-transl smerge-mode diff-mode nnfolder canlock org-element
avl-tree ol-eww ol-rmail ol-mhe ol-irc ol-info ol-gnus nnir ol-docview
doc-view jka-compr image-mode exif ol-bibtex bibtex ol-bbdb ol-w3m
org-capture flow-fill mm-archive qp view help-fns radix-tree cl-print
debug backtrace sort gnus-cite mail-extr gnus-bcklg gnus-async gnus-kill
gnus-ml epa-file gnutls nndraft nnmh nnnil gnus-agent gnus-srvr
gnus-score score-mode nnvirtual gnus-msg gnus-cache gnus-art mm-uu
mml2015 mm-view mml-smime smime dig gnus-sum url url-proxy url-privacy
url-expand url-methods url-history mailcap shr kinsoku url-cookie
url-domsuf url-util svg nntp gnus-group gnus-undo gnus-start gnus-dbus
gnus-cloud nnimap nnmail mail-source utf7 netrc nnoo gnus-spec gnus-int
gnus-range message rfc822 mml mml-sec epa epg epg-config mm-decode
mm-bodies mm-encode mail-parse rfc2231 mailabbrev gmm-utils mailheader
gnus-win misearch multi-isearch network-stream puny nsm rmc bookmark
time-stamp mew-varsx dired-aux term/xterm xterm add-log pinentry
other-frame-window lw-manual lw-manual-data-7-1-0-0 savehist xclip
elisp-slime-nav gnus nnheader gnus-util rmail rmail-loaddefs rfc2047
rfc2045 ietf-drums text-property-search mail-utils mm-util mail-prsvr
company pcase cus-start cus-load ggtags etags fileloop generator ewoc
zenicb-color zenicb-whereis zenicb-complete zenicb-stamp zenicb-history
zenicb-away zenicb zenirc-sasl erc-goodies erc erc-backend pp
erc-loaddefs zenirc-color zenirc-stamp zenirc-trigger zenirc-notify
zenirc-netsplit zenirc-ignore zenirc-history zenirc-format zenirc-dcc
zenirc-complete zenirc-command-queue zenirc-away zenirc sly-autoloads
org-mew mew-auth mew-config mew-imap2 mew-imap mew-nntp2 mew-nntp
mew-pop mew-smtp mew-ssl mew-ssh mew-net mew-highlight mew-sort mew-fib
mew-ext mew-refile mew-demo mew-attach mew-draft mew-message mew-thread
mew-virtual mew-summary4 mew-summary3 mew-summary2 mew-summary
mew-search mew-pick mew-passwd mew-scan mew-syntax mew-bq mew-smime
mew-pgp mew-header mew-exec mew-mark mew-mime mew-unix mew-edit
mew-decode mew-encode mew-cache mew-minibuf mew-complete mew-addrbook
mew-local mew-vars3 mew-vars2 mew-vars mew-env mew-mule3 mew-mule
mew-gemacs mew-key mew-func mew-blvs mew-const mew winner windmove
whitespace tramp-sh tramp tramp-loaddefs trampver tramp-integration
files-x tramp-compat ls-lisp ange-ftp term disp-table ehelp saveplace
recentf tree-widget wid-edit paren ob-lisp ob-shell shell org ob
ob-tangle ob-ref ob-lob ob-table ob-exp org-macro org-footnote org-src
ob-comint org-pcomplete pcomplete org-list org-faces org-entities
noutline outline org-version ob-emacs-lisp ob-core ob-eval org-table ol
org-keys org-compat org-macs org-loaddefs format-spec find-func cal-menu
calendar cal-loaddefs rng-nxml rng-valid rng-loc rng-uri rng-parse
nxml-parse rng-match rng-dt rng-util rng-pttrn nxml-ns nxml-mode
nxml-outln nxml-rap sgml-mode dom nxml-util nxml-enc xmltok mb-depth
ffap thingatpt battery dbus xml time which-func imenu parse-time iso8601
time-date cookie1 server diff generic derived easy-mmode dired-x
gh-common marshal eieio-compat info rx finder-inf package browse-url
url-handlers url-parse auth-source password-cache json url-vars cl
ivy-prescient prescient subr-x map edmacro kmacro counsel xdg advice
xref project eieio eieio-core cl-macs eieio-loaddefs dired
dired-loaddefs compile comint ansi-color swiper cl-seq cl-extra
help-mode easymenu seq byte-opt gv bytecomp byte-compile cconv ivy
delsel ring ivy-faces ivy-overlay colir color cl-loaddefs cl-lib tooltip
eldoc electric uniquify ediff-hook vc-hooks lisp-float-type mwheel
term/x-win x-win term/common-win x-dnd tool-bar dnd fontset image
regexp-opt fringe tabulated-list replace newcomment text-mode elisp-mode
lisp-mode prog-mode register page tab-bar menu-bar rfn-eshadow isearch
timer select scroll-bar mouse jit-lock font-lock syntax facemenu
font-core term/tty-colors frame minibuffer cl-generic cham georgian
utf-8-lang misc-lang vietnamese tibetan thai tai-viet lao korean
japanese eucjp-ms cp51932 hebrew greek romanian slovak czech european
ethiopic indian cyrillic chinese composite charscript charprop
case-table epa-hook jka-cmpr-hook help simple abbrev obarray
cl-preloaded nadvice loaddefs button faces cus-face macroexp files
text-properties overlay sha1 md5 base64 format env code-pages mule
custom widget hashtable-print-readable backquote threads dbusbind
inotify lcms2 dynamic-setting system-font-setting font-render-setting
x-toolkit x multi-tty make-network-process emacs)

Memory information:
((conses 16 3591911 2541685)
 (symbols 48 87049 452)
 (strings 32 528119 452566)
 (string-bytes 1 30189681)
 (vectors 16 217149)
 (vector-slots 8 3232842 6057920)
 (floats 8 1637 5252)
 (intervals 56 501483 50429)
 (buffers 992 581))






* bug#43395: 28.0.50; memory leak
  2020-09-12  2:12 ` bug#43395: 28.0.50; memory leak Madhu
@ 2020-09-14 15:08 ` Eli Zaretskii
  2020-09-15  1:23   ` Madhu
       [not found] ` <handler.43395.D43389.161115724232582.notifdone@debbugs.gnu.org>
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-09-14 15:08 UTC (permalink / raw)
  To: Madhu; +Cc: 43395

> From: Madhu <enometh@meer.net>
> Date: Sat, 12 Sep 2020 07:42:42 +0530
> 
> There appears to be a memory leak, with the Emacs RSS growing
> inordinately in size.
> 
> $ ps o pid,rss,drs,sz,share,start_time,vsize,cmd 26285
>   PID   RSS   DRS  SIZE - START    VSZ CMD
> 26285 2643236 2996379 2664940 - Sep09 2998948 /7/gtk/emacs/build-xt-xft/src/emacs --debug-init --daemon
> 
> I usually only notice the leak when it has gone beyond 2G - when linux
> refuses to suspend because I have limited swap.  In most cases emacs
> would be running for a few days.
> 
> The values reported by garbage-collect do not account for the 2GB
> allocated by Emacs.

Is the GC report below, collected by report-emacs-bug, from the
session whose RSS has grown up to 2GB?  If not, can you post the
output from garbage-collect in that session?

Thanks.






* bug#43395: 28.0.50; memory leak
  2020-09-14 15:08 ` Eli Zaretskii
@ 2020-09-15  1:23   ` Madhu
  0 siblings, 0 replies; 166+ messages in thread
From: Madhu @ 2020-09-15  1:23 UTC (permalink / raw)
  To: eliz; +Cc: 43395

*  Eli Zaretskii <eliz@gnu.org> <83y2lc9z11.fsf@gnu.org>
Wrote on Mon, 14 Sep 2020 18:08:26 +0300
> Is the GC report below, collected by report-emacs-bug, from the
> session whose RSS has grown up to 2GB?  If not, can you post the
> output from garbage-collect in that session?

Yes, it is from the same offending session, and it was collected by
report-emacs-bug.  (That session is now long gone, "pining for the
fjords".)







* bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time
@ 2020-11-15 14:55 Jean Louis
  2020-11-16 16:11 ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-15 14:55 UTC (permalink / raw)
  To: 44666


Since I wish to find out what is making Emacs slow sometimes, I am
running it with this shell script:

emacs-debug.sh:

#!/bin/bash
## CDPATH has to be unset, otherwise eshell and shell do not work well
unset CDPATH
date >> /home/data1/protected/tmp/emacs-debug
emacs >> /home/data1/protected/tmp/emacs-debug 2>&1

Then, if Emacs becomes unresponsive, I can do M-x malloc-info.

This time the computer became totally unresponsive:

- using IceWM (this rarely happens there; with EXWM it is almost the
  rule)

- I had not invoked any special function, just some small list
  processing on 6 elements in total.  My feeling is that the
  unresponsiveness was not caused by that function.

- then, I believe, swapping started (or had already started) during
  my work.

- only barely, and with a lot of patience, could I invoke M-x
  malloc-info

Fri Nov 13 08:40:17 EAT 2020
Fri Nov 13 19:41:22 EAT 2020
Fri Nov 13 21:51:07 EAT 2020
Fri Nov 13 23:28:16 EAT 2020
Fri Nov 13 23:28:49 EAT 2020
Fri Nov 13 23:41:47 EAT 2020
Fri Nov 13 23:42:35 EAT 2020
Fri Nov 13 23:43:32 EAT 2020
Sat Nov 14 00:22:09 EAT 2020
Sat Nov 14 00:26:32 EAT 2020
Sat Nov 14 11:47:26 EAT 2020
Sat Nov 14 11:59:16 EAT 2020
Sun Nov 15 12:38:28 EAT 2020
<malloc version="1">
<heap nr="0">
<sizes>
  <size from="49" to="49" total="49" count="1"/>
  <unsorted from="257" to="257" total="257" count="1"/>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="2" size="306"/>
<system type="current" size="11470942208"/>
<system type="max" size="11470942208"/>
<aspace type="total" size="11470942208"/>
<aspace type="mprotect" size="11470942208"/>
</heap>
<heap nr="1">
<sizes>
  <size from="33" to="48" total="48" count="1"/>
  <size from="65" to="80" total="80" count="1"/>
</sizes>
<total type="fast" count="2" size="128"/>
<total type="rest" count="0" size="0"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
</heap>
<heap nr="2">
<sizes>
  <size from="33" to="48" total="48" count="1"/>
  <size from="65" to="80" total="80" count="1"/>
</sizes>
<total type="fast" count="2" size="128"/>
<total type="rest" count="0" size="0"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
</heap>
<heap nr="3">
<sizes>
  <size from="33" to="48" total="48" count="1"/>
  <size from="65" to="80" total="80" count="1"/>
  <size from="33" to="33" total="33" count="1"/>
</sizes>
<total type="fast" count="2" size="128"/>
<total type="rest" count="1" size="33"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
</heap>
<heap nr="4">
<sizes>
  <size from="33" to="48" total="48" count="1"/>
  <size from="65" to="80" total="80" count="1"/>
</sizes>
<total type="fast" count="2" size="128"/>
<total type="rest" count="0" size="0"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
</heap>
<heap nr="5">
<sizes>
  <size from="33" to="48" total="48" count="1"/>
  <size from="65" to="80" total="80" count="1"/>
  <unsorted from="2449" to="2449" total="2449" count="1"/>
</sizes>
<total type="fast" count="2" size="128"/>
<total type="rest" count="1" size="2449"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
</heap>
<heap nr="6">
<sizes>
  <size from="33" to="48" total="48" count="1"/>
  <size from="65" to="80" total="80" count="1"/>
</sizes>
<total type="fast" count="2" size="128"/>
<total type="rest" count="0" size="0"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
</heap>
<heap nr="7">
<sizes>
  <size from="17" to="32" total="864" count="27"/>
  <size from="33" to="48" total="384" count="8"/>
  <size from="65" to="80" total="160" count="2"/>
  <size from="97" to="112" total="336" count="3"/>
</sizes>
<total type="fast" count="40" size="1744"/>
<total type="rest" count="0" size="0"/>
<system type="current" size="139264"/>
<system type="max" size="139264"/>
<aspace type="total" size="139264"/>
<aspace type="mprotect" size="139264"/>
</heap>
<heap nr="8">
<sizes>
  <size from="17" to="32" total="832" count="26"/>
  <size from="33" to="48" total="240" count="5"/>
  <size from="65" to="80" total="160" count="2"/>
  <size from="97" to="112" total="112" count="1"/>
  <size from="113" to="128" total="128" count="1"/>
  <size from="49" to="49" total="49" count="1"/>
  <size from="65" to="65" total="65" count="1"/>
  <size from="145" to="145" total="145" count="1"/>
  <size from="193" to="193" total="193" count="1"/>
  <size from="449" to="449" total="449" count="1"/>
  <unsorted from="2961" to="2961" total="2961" count="1"/>
</sizes>
<total type="fast" count="35" size="1472"/>
<total type="rest" count="6" size="3862"/>
<system type="current" size="139264"/>
<system type="max" size="139264"/>
<aspace type="total" size="139264"/>
<aspace type="mprotect" size="139264"/>
</heap>
<heap nr="9">
<sizes>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="0" size="0"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
</heap>
<total type="fast" count="87" size="3984"/>
<total type="rest" count="10" size="6650"/>
<total type="mmap" count="2" size="5341184"/>
<system type="current" size="11472166912"/>
<system type="max" size="11472166912"/>
<aspace type="total" size="11472166912"/>
<aspace type="mprotect" size="11472166912"/>
</malloc>




In GNU Emacs 28.0.50 (build 1, x86_64-pc-linux-gnu, X toolkit, cairo version 1.14.8, Xaw3d scroll bars)
 of 2020-11-14 built on protected.rcdrun.com
Repository revision: 31f94e4b1c3dc201646ec436d3e2c477f784ed21
Repository branch: master
System Description: Hyperbola GNU/Linux-libre

Configured using:
 'configure --prefix=/package/text/emacs-2020-11-14 --with-modules
 --with-x-toolkit=lucid'

Configured features:
XAW3D XPM JPEG TIFF GIF PNG RSVG CAIRO SOUND GPM DBUS GSETTINGS GLIB
NOTIFY INOTIFY ACL GNUTLS LIBXML2 FREETYPE HARFBUZZ M17N_FLT LIBOTF ZLIB
TOOLKIT_SCROLL_BARS LUCID X11 XDBE XIM MODULES THREADS JSON PDUMPER
LCMS2

Important settings:
  value of $LC_ALL: en_US.UTF-8
  value of $LANG: de_DE.UTF-8
  locale-coding-system: utf-8-unix

Major mode: Lisp Interaction

Minor modes in effect:
  gpm-mouse-mode: t
  tooltip-mode: t
  global-eldoc-mode: t
  eldoc-mode: t
  electric-indent-mode: t
  mouse-wheel-mode: t
  tool-bar-mode: t
  menu-bar-mode: t
  file-name-shadow-mode: t
  global-font-lock-mode: t
  font-lock-mode: t
  auto-composition-mode: t
  auto-encryption-mode: t
  auto-compression-mode: t
  line-number-mode: t
  transient-mark-mode: t

Load-path shadows:
None found.

Features:
(shadow sort hashcash mail-extr emacsbug message rmc puny dired
dired-loaddefs rfc822 mml easymenu mml-sec epa derived epg epg-config
gnus-util rmail rmail-loaddefs auth-source cl-seq eieio eieio-core
cl-macs eieio-loaddefs password-cache json map text-property-search
time-date subr-x seq byte-opt gv bytecomp byte-compile cconv mm-decode
mm-bodies mm-encode mail-parse rfc2231 mailabbrev gmm-utils mailheader
cl-loaddefs cl-lib sendmail rfc2047 rfc2045 ietf-drums mm-util
mail-prsvr mail-utils t-mouse term/linux disp-table tooltip eldoc
electric uniquify ediff-hook vc-hooks lisp-float-type mwheel term/x-win
x-win term/common-win x-dnd tool-bar dnd fontset image regexp-opt fringe
tabulated-list replace newcomment text-mode elisp-mode lisp-mode
prog-mode register page tab-bar menu-bar rfn-eshadow isearch timer
select scroll-bar mouse jit-lock font-lock syntax facemenu font-core
term/tty-colors frame minibuffer cl-generic cham georgian utf-8-lang
misc-lang vietnamese tibetan thai tai-viet lao korean japanese eucjp-ms
cp51932 hebrew greek romanian slovak czech european ethiopic indian
cyrillic chinese composite charscript charprop case-table epa-hook
jka-cmpr-hook help simple abbrev obarray cl-preloaded nadvice button
loaddefs faces cus-face macroexp files window text-properties overlay
sha1 md5 base64 format env code-pages mule custom widget
hashtable-print-readable backquote threads dbusbind inotify lcms2
dynamic-setting system-font-setting font-render-setting cairo x-toolkit
x multi-tty make-network-process emacs)

Memory information:
((conses 16 52575 6366)
 (symbols 48 7259 1)
 (strings 32 18937 1368)
 (string-bytes 1 616804)
 (vectors 16 8986)
 (vector-slots 8 116851 8619)
 (floats 8 22 260)
 (intervals 56 196 0)
 (buffers 992 11))

-- 
Thanks,
Jean Louis
⎔ λ 🄯 𝍄 𝌡 𝌚






* bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time
  2020-11-15 14:55 bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time Jean Louis
@ 2020-11-16 16:11 ` Eli Zaretskii
  2020-11-16 16:17   ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-16 16:11 UTC (permalink / raw)
  To: Jean Louis; +Cc: 44666

> From: Jean Louis <bugs@gnu.support>
> Date: Sun, 15 Nov 2020 17:55:09 +0300
> 
> Sun Nov 15 12:38:28 EAT 2020
> <malloc version="1">
> <heap nr="0">
> <sizes>
>   <size from="49" to="49" total="49" count="1"/>
>   <unsorted from="257" to="257" total="257" count="1"/>
> </sizes>
> <total type="fast" count="0" size="0"/>
> <total type="rest" count="2" size="306"/>
> <system type="current" size="11470942208"/>
> <system type="max" size="11470942208"/>
> <aspace type="total" size="11470942208"/>
> <aspace type="mprotect" size="11470942208"/>
> </heap>

This basically says you have 11GB in the heap, but there are no
details.  So I'm not sure how this could help us make any progress.

Thanks.






* bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time
  2020-11-16 16:11 ` Eli Zaretskii
@ 2020-11-16 16:17   ` Jean Louis
  2020-11-17 15:04     ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-16 16:17 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: 44666

* Eli Zaretskii <eliz@gnu.org> [2020-11-16 19:12]:
> > From: Jean Louis <bugs@gnu.support>
> > Date: Sun, 15 Nov 2020 17:55:09 +0300
> > 
> > Sun Nov 15 12:38:28 EAT 2020
> > <malloc version="1">
> > <heap nr="0">
> > <sizes>
> >   <size from="49" to="49" total="49" count="1"/>
> >   <unsorted from="257" to="257" total="257" count="1"/>
> > </sizes>
> > <total type="fast" count="0" size="0"/>
> > <total type="rest" count="2" size="306"/>
> > <system type="current" size="11470942208"/>
> > <system type="max" size="11470942208"/>
> > <aspace type="total" size="11470942208"/>
> > <aspace type="mprotect" size="11470942208"/>
> > </heap>
> 
> This basically says you have 11GB in the heap, but there are no
> details.  So I'm not sure how this could help us make any progress.

I was thinking that command would tell you something.

There was nothing special. I have 4 GB memory and 8 GB swap. There was
no special program running, just XTerm and Emacs.

I would like to find out why Emacs is taking that much memory, but I
am unable to.

Now I am running it under ulimit, but I am unsure whether the ulimit
command really works, as the manual page says it sometimes does not.

#!/bin/bash
unset CDPATH
ulimit -m 3145728
date >> /home/data1/protected/tmp/emacs-debug
emacs >> /home/data1/protected/tmp/emacs-debug 2>&1
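
(If ulimit -m turns out not to be enforced -- on modern Linux kernels
the RSS limit reportedly is not -- limiting the address space instead
may be more effective; a variant of the same script:)

#!/bin/bash
unset CDPATH
## -v limits the address space (in KB); unlike -m, it is enforced
ulimit -v 3145728
date >> /home/data1/protected/tmp/emacs-debug
emacs >> /home/data1/protected/tmp/emacs-debug 2>&1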

If there is nothing to be done with this bug, we can close it.

You could suggest what I should pay attention to, in order to find
out what is going on.






* bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time
  2020-11-16 16:17   ` Jean Louis
@ 2020-11-17 15:04     ` Eli Zaretskii
  2020-11-19  6:59       ` Jean Louis
  2020-11-19  7:43       ` bug#44666: 28.0.50; malloc-info: Emacs became not responsive, " Jean Louis
  0 siblings, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-17 15:04 UTC (permalink / raw)
  To: Jean Louis; +Cc: 44666

> Date: Mon, 16 Nov 2020 19:17:35 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: 44666@debbugs.gnu.org
> 
> * Eli Zaretskii <eliz@gnu.org> [2020-11-16 19:12]:
> > > From: Jean Louis <bugs@gnu.support>
> > > Date: Sun, 15 Nov 2020 17:55:09 +0300
> > > 
> > > Sun Nov 15 12:38:28 EAT 2020
> > > <malloc version="1">
> > > <heap nr="0">
> > > <sizes>
> > >   <size from="49" to="49" total="49" count="1"/>
> > >   <unsorted from="257" to="257" total="257" count="1"/>
> > > </sizes>
> > > <total type="fast" count="0" size="0"/>
> > > <total type="rest" count="2" size="306"/>
> > > <system type="current" size="11470942208"/>
> > > <system type="max" size="11470942208"/>
> > > <aspace type="total" size="11470942208"/>
> > > <aspace type="mprotect" size="11470942208"/>
> > > </heap>
> > 
> > This basically says you have 11GB in the heap, but there are no
> > details.  So I'm not sure how this could help us make any progress.
> 
> I was thinking that command would tell you something.

It tells something, I just don't yet know what that is.

> If there is nothing to be done with this bug, we can close.

No, closing is premature.  I've merged this bug with 3 other similar
ones, and we are discussing this issue with glibc malloc experts.






* bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time
  2020-11-17 15:04     ` Eli Zaretskii
@ 2020-11-19  6:59       ` Jean Louis
  2020-11-19 14:37         ` bug#43389: 28.0.50; Emacs memory leaks " Eli Zaretskii
  2020-11-19  7:43       ` bug#44666: 28.0.50; malloc-info: Emacs became not responsive, " Jean Louis
  1 sibling, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-19  6:59 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: 44666

* Eli Zaretskii <eliz@gnu.org> [2020-11-17 10:04]:
> > If there is nothing to be done with this bug, we can close.
> 
> No, closing is premature.  I've merged this bug with 3 other similar
> ones, and we are discussing this issue with glibc malloc experts.

If the bug is merged, do I just reply to this email?

My emacs-uptime is now 19 hours, and I can see 4819 MB of swap used,
according to symon-mode.

I do not have a large number of buffers; I tried deleting some, and
there was no change.  User processes are listed below.  I have not
finished this session, so I am prematurely sending the file
emacs.strace-2020-11-18-14:42:59-Wednesday, which may be accessed at
the link below.  I could not copy the file fully through eshell,
probably because copying through eshell makes the strace grow longer
and longer, so the copy never finishes.  I therefore aborted the
copy, and the file may be incomplete.  It is also incomplete because
the session is not finished.

The strace is here, a 13M download; unpacked it is more than 1.2 GB:
https://gnu.support/files/tmp/emacs.strace-2020-11-18-14:42:59-Wednesday.lz

As I finish this email, the reported swap usage is 4987 MB, and I
know from experience that the system will end up unusable.

              total        used        free      shared  buff/cache   available
Mem:        3844508     3575720      119476       37576      149312       55712
Swap:       8388604     4820656     3567948

$ htop shows

8399M VIRT memory for emacs and 3211M RES memory for emacs

  admin 30586  4.5 88.1 Nov 18 50:52 emacs
  admin 30584  0.9  0.0 Nov 18 10:20 strace -o emacs.strace-2020-11-18-14:42:59-Wednesday emacs
  admin  5542  0.1  0.1 Nov 17 02:13 icewm --notify
  admin 15914  0.0  0.4  07:26 00:02 mutt
  admin  5584  0.0  0.0 Nov 17 00:09 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
  admin 17639  0.0  0.0  09:42 00:00 emacsclient -c /home/data1/protected/tmp/mutt-protected-1001-15914-94772654077392443
  admin  8410  0.0  0.0 Nov 18 00:05 /usr/lib/at-spi2-core/at-spi2-registryd --use-gnome-session
  admin 17023  0.0  0.1  08:35 00:00 /bin/bash --noediting -i
  admin 21322  0.0  0.0 Nov 18 00:00 /usr/bin/festival
  admin 28366  0.0  0.0 Nov 18 00:00 /bin/bash
  admin  8408  0.0  0.0 Nov 18 00:00 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
  admin  5541  0.0  0.0 Nov 17 00:00 icewmbg
  admin 28038  0.0  0.0 Nov 18 00:00 /usr/lib/dconf/dconf-service
  admin  8429  0.0  0.0 Nov 18 00:00 /usr/lib/GConf/gconfd-2
  admin 29399  0.0  0.0  00:18 00:00 /usr/local/bin/psql -U maddox -h localhost -P pager=off rcdbusiness
  admin  5426  0.0  0.0 Nov 17 00:00 -bash
  admin 14932  0.0  0.0 Nov 18 00:00 /usr/bin/aspell -a -m -d en --encoding=utf-8
  admin  8403  0.0  0.0 Nov 18 00:00 /usr/lib/at-spi2-core/at-spi-bus-launcher
  admin  5501  0.0  0.0 Nov 17 00:00 /bin/sh /usr/bin/startx
  admin  5523  0.0  0.0 Nov 17 00:00 xinit /home/data1/protected/.xinitrc -- /etc/X11/xinit/xserverrc :0 vt1 -keeptty -auth /tmp/serverauth.Tvh06SZQdP
  admin  5528  0.0  0.0 Nov 17 00:00 sh /home/data1/protected/.xinitrc
  admin  5540  0.0  0.0 Nov 17 00:00 icewm-session
  admin  5579  0.0  0.0 Nov 17 00:00 dbus-launch --autolaunch=9459754a0df54d1465edf14d5b0bfe99 --binary-syntax --close-stderr
  admin 30582  0.0  0.0 Nov 18 00:00 /bin/bash /home/data1/protected/bin/emacs-debug.sh







* bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time
  2020-11-17 15:04     ` Eli Zaretskii
  2020-11-19  6:59       ` Jean Louis
@ 2020-11-19  7:43       ` Jean Louis
  1 sibling, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-19  7:43 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: 44666

* Eli Zaretskii <eliz@gnu.org> [2020-11-17 10:04]:
> No, closing is premature.  I've merged this bug with 3 other similar
> ones, and we are discussing this issue with glibc malloc experts.

I have now finished the session, as it became unbearable.  I could
not switch from one window manager workspace to another.  Swap usage
grew to over 5.3 GB.

After finishing the session, memory usage came back to normal, and I
can start a new session.

The link to the strace file that I sent in the previous email has
been updated; the trace is now complete, as the session has finished.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-19  6:59       ` Jean Louis
@ 2020-11-19 14:37         ` Eli Zaretskii
  2020-11-20  3:16           ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-19 14:37 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

> Date: Thu, 19 Nov 2020 09:59:44 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: 44666@debbugs.gnu.org
> 
> * Eli Zaretskii <eliz@gnu.org> [2020-11-17 10:04]:
> > > If there is nothing to be done with this bug, we can close.
> > 
> > No, closing is premature.  I've merged this bug with 3 other similar
> > ones, and we are discussing this issue with glibc malloc experts.
> 
> If the bug is merged, do I just reply to this email?

No, it's better to reply to bug#43389 (I've redirected the discussion
now), and please keep the other addressees on the CC list, as they are
not subscribed to the bug list, I believe.

> My emacs-uptime is now 19 hours, and I can see 4819 MB of swap used,
> according to symon-mode.
> 
> I do not have a large number of buffers; I tried deleting some, and
> there was no change.  User processes are listed below.  I have not
> finished this session, so I am prematurely sending the file
> emacs.strace-2020-11-18-14:42:59-Wednesday, which may be accessed at
> the link below.  I could not copy the file fully through eshell,
> probably because copying through eshell makes the strace grow longer
> and longer, so the copy never finishes.  I therefore aborted the
> copy, and the file may be incomplete.  It is also incomplete because
> the session is not finished.
> 
> The strace is here, a 13M download; unpacked it is more than 1.2 GB:
> https://gnu.support/files/tmp/emacs.strace-2020-11-18-14:42:59-Wednesday.lz

I've looked at that file, but couldn't see any smoking guns.  It shows
that your brk goes up and up and up until it reaches more than 7GB.
Some of the requests come in groups, totaling about 5MB, not sure why
(these groups always follow a call to timerfd_settime, which seems to
hint that we are setting an atimer for something).  However, without
time stamps for each syscall, it is hard to tell whether these series
of calls to 'brk' are indeed made one after the other, or whether
they are indeed related to something we use atimers for, because it is
unknown how much time passed between these calls.
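
(If you capture another trace, asking strace for timestamps would
help; something like

  strace -ttt -e trace=brk,mmap,munmap -o emacs.strace emacs

prints an absolute timestamp before each of those syscalls.)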

I think you should try using the malloc tracing tools pointed to here:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

Also, next time your vsize is several GBytes, please see how much
memory your buffers take, by evaluating this form:

 (let ((size 0))
   (dolist (buffer (buffer-list) size)
     (setq size (+ size (buffer-size buffer)))))







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-19 14:37         ` bug#43389: 28.0.50; Emacs memory leaks " Eli Zaretskii
@ 2020-11-20  3:16           ` Jean Louis
  2020-11-20  8:10             ` Eli Zaretskii
  2020-11-23  3:35             ` Carlos O'Donell
  0 siblings, 2 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-20  3:16 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Eli Zaretskii <eliz@gnu.org> [2020-11-19 17:38]:
> I think you should try using the malloc tracing tools pointed to here:
> 
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

When running for a long time, Emacs will crash at a certain point,
because my hard disk gets full: /tmp is only about 2 gigabytes.  I
did not understand from Carlos how to change the location of the
files.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-20  3:16           ` Jean Louis
@ 2020-11-20  8:10             ` Eli Zaretskii
  2020-11-22 19:52               ` Jean Louis
  2020-11-23 10:59               ` Jean Louis
  2020-11-23  3:35             ` Carlos O'Donell
  1 sibling, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-20  8:10 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

> Date: Fri, 20 Nov 2020 06:16:26 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> 
> * Eli Zaretskii <eliz@gnu.org> [2020-11-19 17:38]:
> > I think you should try using the malloc tracing tools pointed to here:
> > 
> >   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> 
> When running for a long time, Emacs will crash at a certain point,
> because my hard disk gets full: /tmp is only about 2 gigabytes.  I
> did not understand from Carlos how to change the location of the
> files.

Carlos, could you please help Jean to direct the traces to a place
other than /tmp?






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-20  8:10             ` Eli Zaretskii
@ 2020-11-22 19:52               ` Jean Louis
  2020-11-22 20:16                 ` Eli Zaretskii
  2020-11-23 10:59               ` Jean Louis
  1 sibling, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-22 19:52 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Eli Zaretskii <eliz@gnu.org> [2020-11-20 03:11]:
> > Date: Fri, 20 Nov 2020 06:16:26 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
> >   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> > 
> > * Eli Zaretskii <eliz@gnu.org> [2020-11-19 17:38]:
> > > I think you should try using the malloc tracing tools pointed to here:
> > > 
> > >   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> > 
> > When running for a long time, Emacs will crash at a certain point,
> > because my hard disk gets full: /tmp is only about 2 gigabytes.  I
> > did not understand from Carlos how to change the location of the
> > files.
> 
> Carlos, could you please help Jean to direct the traces to a place
> other than /tmp?

I am now following this strategy here:
https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking

I have run emacs -Q for a very short time, with:

MALLOC_CONF=prof_leak:true,lg_prof_sample:0,prof_final:true \
LD_PRELOAD=/package/lib/jemalloc/lib/libjemalloc.so.2 emacs -Q

and PDF files were generated.  I also wish to mention that I use 2
dynamic modules, emacs-libpq and emacs-libvterm, in case that
influences the overall picture.

You may know better how to interpret those files and may spot
something.  This Emacs session ran for just a minute or so.

https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26915.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26915.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26918.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26918.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26921.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26921.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26922.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26922.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26923.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26923.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26924.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26924.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26925.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26925.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26931.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26931.0.f.heap.pdf
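
(The PDFs were produced from the .heap files with jemalloc's jeprof
tool, roughly like this -- the emacs path here is just an example;
use whatever binary was actually run:

jeprof --pdf /path/to/emacs jeprof.26889.0.f.heap > jeprof.26889.0.f.heap.pdf)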

I am now running a new session, and may have quite different data
after hours of running.

Jean






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-22 19:52               ` Jean Louis
@ 2020-11-22 20:16                 ` Eli Zaretskii
  2020-11-23  3:41                   ` Carlos O'Donell
                                     ` (2 more replies)
  0 siblings, 3 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-22 20:16 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

> Date: Sun, 22 Nov 2020 22:52:14 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> 
> I am now following this strategy here:
> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking

That uses a different implementation of malloc, so I'm not sure it
will help us.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-20  3:16           ` Jean Louis
  2020-11-20  8:10             ` Eli Zaretskii
@ 2020-11-23  3:35             ` Carlos O'Donell
  2020-11-23 11:07               ` Jean Louis
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-23  3:35 UTC (permalink / raw)
  To: Jean Louis, Eli Zaretskii; +Cc: fweimer, 43389, trevor, dj, michael_heerdegen

On 11/19/20 10:16 PM, Jean Louis wrote:
> * Eli Zaretskii <eliz@gnu.org> [2020-11-19 17:38]:
>> I think you should try using the malloc tracing tools pointed to here:
>>
>>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> 
> When running for a long time, Emacs will crash at a certain point,
> because my hard disk gets full: /tmp is only about 2 gigabytes.  I
> did not understand from Carlos how to change the location of the
> files.
 
The glibc malloc tracer functionality can be adjusted with environment
variables.

Example:

MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=./ls.mtr LD_PRELOAD=./libmtrace.so ls
mtrace: writing to ./ls.mtr.350802

In the above example, the use of MTRACE_CTL_FILE=./ls.mtr instructs
the tracer to write the trace file to the current directory.

The tracer appends the PID of the traced process to the ls.mtr file
name (plus a sequence number that increases monotonically in the
event of a name conflict), which keeps the output files distinct.

-- 
Cheers,
Carlos.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-22 20:16                 ` Eli Zaretskii
@ 2020-11-23  3:41                   ` Carlos O'Donell
  2020-11-23  8:11                   ` Jean Louis
  2020-11-23 13:27                   ` Jean Louis
  2 siblings, 0 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-23  3:41 UTC (permalink / raw)
  To: Eli Zaretskii, Jean Louis; +Cc: fweimer, 43389, trevor, dj, michael_heerdegen

On 11/22/20 3:16 PM, Eli Zaretskii wrote:
>> Date: Sun, 22 Nov 2020 22:52:14 +0300
>> From: Jean Louis <bugs@gnu.support>
>> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
>>
>> I am now following this strategy here:
>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> 
> That uses a different implementation of malloc, so I'm not sure it
> will help us.

Correct, that is a different malloc implementation and may have
completely different behaviour for your given workload. That is
not to say that it isn't a viable solution to try another allocator
that matches your workload. However, in this bug we're trying to
determine why the "default" configuration of emacs and glibc's
allocator causes memory usage to grow.

We want to run the glibc malloc algorithms because that is the
implementation under which we are observing the increased memory
pressure. The tracer I've suggested will get us an API trace
that we can use to determine if it is actually API calls that
are causing an increase in the memory usage or if it's an
algorithmic issue. It is not always obvious to see from the
API calls, but having the trace is better than not.

-- 
Cheers,
Carlos.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-22 20:16                 ` Eli Zaretskii
  2020-11-23  3:41                   ` Carlos O'Donell
@ 2020-11-23  8:11                   ` Jean Louis
  2020-11-23  9:59                     ` Eli Zaretskii
  2020-11-23 13:27                   ` Jean Louis
  2 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-23  8:11 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Eli Zaretskii <eliz@gnu.org> [2020-11-22 23:17]:
> > Date: Sun, 22 Nov 2020 22:52:14 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
> >   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> > 
> > I am now following this strategy here:
> > https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> 
> That uses a different implementation of malloc, so I'm not sure it
> will help us.

It will not help if, when you interpret the PDF reports, you do not
see anything useful in them.  If you do interpret those PDF reports,
please tell me, as that could be useful for finding possible causes
or other issues in Emacs.

Does this one tell you anything?
https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap.pdf

Does the isra.0 module inside this one tell you anything?
https://gnu.support/files/tmp/2020-11-22/jeprof.26922.0.f.heap.pdf

I am using dynamic modules like vterm and libpq; can that influence
memory or create memory leaks?

What is tst_post_reentrancy_raw?  Is that something that eats memory?

I am still running this session with jemalloc, and I wish to see
whether anything happens that blocks the work, similar to how it
blocks in a normal run.  This helps narrow things down a little: if
running Emacs with jemalloc does not cause problems once, or maybe
2-5 or 10 times, that may point the problem to the standard malloc
rather than to Emacs.

Then, in the next session, I will try the tools as described again
and submit the data.

To help me understand: do you think the problem is in Emacs or in
glibc malloc?






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23  8:11                   ` Jean Louis
@ 2020-11-23  9:59                     ` Eli Zaretskii
  2020-11-23 17:19                       ` Arthur Miller
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23  9:59 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

On November 23, 2020 10:11:22 AM GMT+02:00, Jean Louis <bugs@gnu.support> wrote:
> * Eli Zaretskii <eliz@gnu.org> [2020-11-22 23:17]:
> > > Date: Sun, 22 Nov 2020 22:52:14 +0300
> > > From: Jean Louis <bugs@gnu.support>
> > > Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
> > >   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> > > 
> > > I am now following this strategy here:
> > > https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> > 
> > That uses a different implementation of malloc, so I'm not sure it
> > will help us.
> 
> It will not help if, when you interpret the PDF reports, you do not
> see anything useful in them.  If you do interpret those PDF reports,
> please tell me, as that could be useful for finding possible causes
> or other issues in Emacs.

Granted, I looked at the reports before writing that response.  I don't see anything there related to Emacs code.

> Does this one tell you anything?
> https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap.pdf

It says that most of the memory was allocated by a subroutine of jemalloc.  As I'm not familiar with how jemalloc works, I see no way for us to draw any significant conclusions from that.

> Does the isra.0 module inside this one tell you anything?

AFAIU, it's some internal jemalloc module.

> I am using dynamic modules like vterm and libpq; can that influence
> memory or create memory leaks?

I have no idea, but I don't think I see any of their functions in these reports.

> What is tst_post_reentrancy_raw?  Is that something that eats memory?

I don't know.  It's something internal to jemalloc.

> I am still running this session with jemalloc, and I wish to see
> whether anything happens that blocks the work, similar to how it
> blocks in a normal run.  This helps narrow things down a little: if
> running Emacs with jemalloc does not cause problems once, or maybe
> 2-5 or 10 times, that may point the problem to the standard malloc
> rather than to Emacs.

The glibc malloc is the prime suspect anyway.  I don't really believe Emacs had such a glaring memory leak.  So trying different malloc implementations is, from my POV, a waste of time at this stage.

> Then, in the next session, I will try the tools as described again
> and submit the data.
> 
> To help me understand: do you think the problem is in Emacs or in
> glibc malloc?

I suspect the problem is in how we use glibc's malloc -- there are some usage patterns that cause glibc to be suboptimal in its memory usage, and I hope we will find ways to fine tune it to our needs.
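
(For example, glibc's allocator can be tuned from the environment without rebuilding anything; purely as an illustration, something like

  MALLOC_ARENA_MAX=2 MALLOC_TRIM_THRESHOLD_=131072 emacs

caps the number of malloc arenas and lowers the trim threshold -- whether any such knob would actually help here is exactly the question.)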

But that is just a guess, and so I wish you'd use the tools pointed out by Carlos, because they are the most efficient way of collecting evidence that might allow us to make some progress here.

We have the attention of the best experts on the issue; let's use their attention and their time as best as we possibly can.

TIA







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-20  8:10             ` Eli Zaretskii
  2020-11-22 19:52               ` Jean Louis
@ 2020-11-23 10:59               ` Jean Louis
  2020-11-23 15:46                 ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-23 10:59 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

The session I was running with jemalloc memory leak logging is
finished now. 

Just the same thing happened. It started getting slower and slower. 

In the IceWM window manager I have a visual representation of memory
usage, and that is how I get a feeling for it; there is also a
tooltip telling me that more and more memory is used.  When swap
reaches about 3 GB, I turn on symon-mode, and in Emacs I see more and
more swapping.

The heap file is here (24M; maybe not needed for review):
https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap

The visualization is here (a 20K PDF file):
https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap.pdf

Do you see anything interesting inside that could point to memory leaks?

Jean







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23  3:35             ` Carlos O'Donell
@ 2020-11-23 11:07               ` Jean Louis
  0 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 11:07 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor

* Carlos O'Donell <carlos@redhat.com> [2020-11-23 06:35]:
> On 11/19/20 10:16 PM, Jean Louis wrote:
> > * Eli Zaretskii <eliz@gnu.org> [2020-11-19 17:38]:
> >> I think you should try using the malloc tracing tools pointed to here:
> >>
> >>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> > 
> > When running for a long time, Emacs will crash at a certain point,
> > because my hard disk gets full: /tmp is only about 2 gigabytes.  I
> > did not understand from Carlos how to change the location of the
> > files.
>  
> The glibc malloc tracer functionality can be adjusted with environment
> variables.
> 
> Example:
> 
> MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=./ls.mtr LD_PRELOAD=./libmtrace.so ls
> mtrace: writing to ./ls.mtr.350802
> 
> The appended PID helps keep the files distinct (and includes a sequence
> number in the event of conflict).

Alright, thank you.

My session started with it.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-22 20:16                 ` Eli Zaretskii
  2020-11-23  3:41                   ` Carlos O'Donell
  2020-11-23  8:11                   ` Jean Louis
@ 2020-11-23 13:27                   ` Jean Louis
  2020-11-23 15:54                     ` Carlos O'Donell
  2020-11-23 19:50                     ` Carlos O'Donell
  2 siblings, 2 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 13:27 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Eli Zaretskii <eliz@gnu.org> [2020-11-22 23:17]:
> > Date: Sun, 22 Nov 2020 22:52:14 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
> >   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> > 
> > I am now following this strategy here:
> > https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> 
> That uses a different implementation of malloc, so I'm not sure it
> will help us.

This is how I ran the shorter Emacs session, until it got blocked:

MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1

And here is mtrace:

https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz

I cannot run Emacs that way, as something happens and Emacs gets
blocked.  The problem arrives with M-s M-w, searching for anything on
the Internet with eww.  Everything blocks, and I get the message:

error in process filter: Quit

After that, C-g does not work; I cannot kill the buffer, save the
current work or other buffers, switch from buffer to buffer, or open
any menu.

Debugging requires longer-running sessions and actual work in Emacs.

This happens every time I run Emacs with the above example command.

Unless there is a safer way of debugging, the above one is not
workable, as it blocks everything, and I do use eww incidentally or
accidentally in my work.

I hope that something will be visible from that mtrace.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 10:59               ` Jean Louis
@ 2020-11-23 15:46                 ` Eli Zaretskii
  2020-11-23 17:29                   ` Arthur Miller
                                     ` (2 more replies)
  0 siblings, 3 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 15:46 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

> Date: Mon, 23 Nov 2020 13:59:47 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> 
> In the IceWM window manager I have a visual representation of memory
> usage, and that is how I get a feeling for it; there is also a
> tooltip telling me that more and more memory is used.  When swap
> reaches about 3 GB, I turn on symon-mode, and in Emacs I see more
> and more swapping.

I think I described how to write an Emacs function that you could use
to watch the vsize of the Emacs process and alert you to it being
above some threshold.
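
Something like this sketch would do (threshold and interval picked
arbitrarily):

  (run-with-timer
   0 30
   (lambda ()
     ;; `vsize' from process-attributes is reported in KB.
     (let ((vsize (cdr (assq 'vsize (process-attributes (emacs-pid))))))
       (when (and vsize (> vsize (* 2 1024 1024)))
         (message "Emacs vsize is now %d MB" (/ vsize 1024))))))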

> The heap file is here (24M; maybe not needed for review):
> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap
> 
> The visualization is here (a 20K PDF file):
> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap.pdf
> 
> Do you see anything interesting inside that could point to memory leaks?

I'm not sure.  I think I see that you have some timer that triggers a
lot of memory allocations because it conses a lot of Lisp objects.
Whether that is part of the problem or not is not clear.

Next time when your session causes the system to swap, please type:

  M-: (garbage-collect) RET

and post here the output of that (it should be a list of numbers
whose meanings are explained in the doc string of garbage-collect).
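
(Evaluating (pp (garbage-collect)) instead will pretty-print the
list, which is somewhat easier to read.)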

Also, I think I asked you to tell how large your buffers are, by
evaluating the following (again, near the point where your session
causes the system to page heavily):

  (let ((size 0))
    (dolist (buffer (buffer-list) size)
      (setq size (+ size (buffer-size buffer)))))

It is important to have both these pieces of information from the same
session at the same time near the point where you must kill Emacs, so
that we know how much memory is actually used by your session at that
point (as opposed to memory that is "free" in the heap, but was not
returned to the OS).

Thanks.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 13:27                   ` Jean Louis
@ 2020-11-23 15:54                     ` Carlos O'Donell
  2020-11-23 18:58                       ` Jean Louis
  2020-11-23 19:50                     ` Carlos O'Donell
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-23 15:54 UTC (permalink / raw)
  To: Jean Louis, Eli Zaretskii; +Cc: fweimer, 43389, trevor, dj, michael_heerdegen

On 11/23/20 8:27 AM, Jean Louis wrote:
> * Eli Zaretskii <eliz@gnu.org> [2020-11-22 23:17]:
>>> Date: Sun, 22 Nov 2020 22:52:14 +0300
>>> From: Jean Louis <bugs@gnu.support>
>>> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>>>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
>>>
>>> I am now following this strategy here:
>>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
>>
>> That uses a different implementation of malloc, so I'm not sure it
>> will help us.
> 
> This is how I have run the shorter Emacs session until it got blocked:
> 
> MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> And here is mtrace:
> 
> https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
> 
> I cannot run Emacs that way, as something happens and Emacs gets
> blocked. The problem arrives with M-s M-w, searching for anything on
> the Internet with eww. Everything blocks, and I get the message:
> 
> error in process filter: Quit

Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
which may affect the process if using pipes.

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23  9:59                     ` Eli Zaretskii
@ 2020-11-23 17:19                       ` Arthur Miller
  2020-11-23 17:44                         ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 17:19 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, Jean Louis, dj, michael_heerdegen, trevor, carlos

> The glibc malloc is the prime suspect anyway.  I don't really believe Emacs had
> such a glaring memory leak.

This has to be something introduced fairly recently, right?

I didn't have any such problems before, but since maybe a few weeks
ago I have also experienced heavy lockups of my entire OS, to the
point where all of X11 got unresponsive; when it happens I can't even
switch to a terminal to kill Emacs. What I do is Alt-Shift to another
virtual Linux console. I don't even need to log into the system in
that console; I can then Alt-Shift 1 to go back to the one I am logged
into, and everything is normal. Emacs is restarted by systemd every
time, and everything is responsive and working as normal.

This started some time ago, and I have noticed that it happens when I
am cleaning my disk and reading big directories in Dired (I have some
with ~7k-10k files in them). I was using Helm to complete paths while
shifting files and folders around. After maybe an hour or so I would
experience a big slowdown. I don't have a swap file enabled on my
system at all, so I am not sure what was going on, but I haven't had
time to participate in this memory leak thing yet. I haven't
experienced any problems since I last recompiled Emacs, which was on
the 18th (last Wednesday). I recompiled without GTK this time, but I
have no idea whether it has anything to do with the issue; it was just
a wild shot to see if things are better.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 15:46                 ` Eli Zaretskii
@ 2020-11-23 17:29                   ` Arthur Miller
  2020-11-23 17:45                     ` Eli Zaretskii
  2020-11-23 18:33                   ` Jean Louis
  2020-11-23 21:30                   ` Trevor Bentley
  2 siblings, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 17:29 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, Jean Louis, dj, michael_heerdegen, trevor, carlos

Eli Zaretskii <eliz@gnu.org> writes:

>> Date: Mon, 23 Nov 2020 13:59:47 +0300
>> From: Jean Louis <bugs@gnu.support>
>> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
>> 
>> In the IceWM window manager I have a visual representation of memory
>> usage, and that is how I get the feeling; there is also a tooltip
>> telling me that more and more memory is used. When it starts to swap,
>> at around 3 GB, I turn on symon-mode and in Emacs I see more and more
>> swapping.
>
> I think I described how to write an Emacs function that you could use
> to watch the vsize of the Emacs process and alert you to it being
> above some threshold.
>
>> The heap file is here 24M, maybe not needed for review:
>> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap
>> 
>> Visualization is here 20K PDF file:
>> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap.pdf
>> 
>> Do you see anything interesting inside that should tell about memory leaks?
>
> I'm not sure.  I think I see that you have some timer that triggers a
> lot of memory allocations because it conses a lot of Lisp objects.
> Whether that is part of the problem or not is not clear.
>
> Next time when your session causes the system to swap, please type:
>
>   M-: (garbage-collect) RET
>
> and post here the output of that (it should be a list of numbers
> whose meanings are explained in the doc string of garbage-collect).
>
> Also, I think I asked you to tell how large your buffers are, by
> evaluating the following (again, near the point where your session
> causes the system to page heavily):
>
>   (let ((size 0))
>     (dolist (buffer (buffer-list) size)
>       (setq size (+ size (buffer-size buffer)))))
>
> It is important to have both these pieces of information from the same
> session at the same time near the point where you must kill Emacs, so
> that we know how much memory is actually used by your session at that
> point (as opposed to memory that is "free" in the heap, but was not
> returned to the OS).
>
> Thanks.
For me it happens really, really fast. Things work normally, and then
suddenly everything freezes, and after the first freeze it takes
forever to see the result of any keypress. For example, video in
Firefox slows down to about a frame per minute; I can see that the
system is alive, but it is impossible to type something like
(garbage-collect) and see the result; I would be sitting here for a
day :-).

The only thing I can do is switch to another console, and then back.
By that time the Emacs process has been restarted and everything is
normal. I don't use a swap file at all, and I can't believe that Emacs
is eating up 32 gigs of RAM either. However, I can't type any command
to see what it is peaking at, since everything is effectively frozen.
I have seen it at 800 megs on my machine at some point, but that is
far away from the 32 gigs I have.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 17:19                       ` Arthur Miller
@ 2020-11-23 17:44                         ` Eli Zaretskii
  2020-11-23 18:34                           ` Arthur Miller
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 17:44 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

> From: Arthur Miller <arthur.miller@live.com>
> Cc: Jean Louis <bugs@gnu.support>,  fweimer@redhat.com,
>   43389@debbugs.gnu.org,  dj@redhat.com,  michael_heerdegen@web.de,
>   trevor@trevorbentley.com,  carlos@redhat.com
> Date: Mon, 23 Nov 2020 18:19:32 +0100
> 
> > The glibc malloc is the prime suspect anyway.  I don't really believe Emacs had
> > such a glaring memory leak.
> 
> This has to be something introduced fairly recently, right?

Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
somewhat differently, and OTOH glibc removed some of the malloc hooks
we used to use in versions of Emacs before 26.  In addition, glibc is
also being developed, and maybe some change there somehow triggered
this.

As you see, there's more than one factor that could possibly be
related.

> I didn't have any such problems before, but since maybe a few weeks
> ago I have also experienced heavy lockups of my entire OS, to the
> point where all of X11 got unresponsive; when it happens I can't even
> switch to a terminal to kill Emacs. What I do is Alt-Shift to another
> virtual Linux console. I don't even need to log into the system in
> that console; I can then Alt-Shift 1 to go back to the one I am logged
> into, and everything is normal. Emacs is restarted by systemd every
> time, and everything is responsive and working as normal.
>
> This started some time ago, and I have noticed that it happens when I
> am cleaning my disk and reading big directories in Dired (I have some
> with ~7k-10k files in them). I was using Helm to complete paths while
> shifting files and folders around. After maybe an hour or so I would
> experience a big slowdown. I don't have a swap file enabled on my
> system at all, so I am not sure what was going on, but I haven't had
> time to participate in this memory leak thing yet. I haven't
> experienced any problems since I last recompiled Emacs, which was on
> the 18th (last Wednesday). I recompiled without GTK this time, but I
> have no idea whether it has anything to do with the issue; it was just
> a wild shot to see if things are better.

If the problem is memory, I suggest looking at the system log to see
whether there are any signs of that.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 17:29                   ` Arthur Miller
@ 2020-11-23 17:45                     ` Eli Zaretskii
  2020-11-23 18:40                       ` Arthur Miller
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 17:45 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

> From: Arthur Miller <arthur.miller@live.com>
> Cc: Jean Louis <bugs@gnu.support>,  fweimer@redhat.com,
>   43389@debbugs.gnu.org,  dj@redhat.com,  michael_heerdegen@web.de,
>   trevor@trevorbentley.com,  carlos@redhat.com
> Date: Mon, 23 Nov 2020 18:29:40 +0100
> 
> For me it happens really, really fast. Things work normally, and then
> suddenly everything freezes, and after the first freeze it takes
> forever to see the result of any keypress. For example, video in
> Firefox slows down to about a frame per minute; I can see that the
> system is alive, but it is impossible to type something like
> (garbage-collect) and see the result; I would be sitting here for a
> day :-).

That doesn't sound like a memory problem to me.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 15:46                 ` Eli Zaretskii
  2020-11-23 17:29                   ` Arthur Miller
@ 2020-11-23 18:33                   ` Jean Louis
  2020-11-23 21:30                   ` Trevor Bentley
  2 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 18:33 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Eli Zaretskii <eliz@gnu.org> [2020-11-23 18:46]:
> I think I described how to write an Emacs function that you could use
> to watch the vsize of the Emacs process and alert you to it being
> above some threshold.

Yes, I will do that.

I will use this to inform you:

(defun good-bye ()
  "Save GC stats, total buffer size, uptime and PID to a file."
  (interactive)
  (let* ((garbage (garbage-collect))
         ;; Sum the sizes of all live buffers.
         (size 0)
         (buffers-size (dolist (buffer (buffer-list) size)
                         (setq size (+ size (buffer-size buffer)))))
         (uptime (emacs-uptime))
         (pid (emacs-pid))
         (file (format "~/tmp/emacs-session-%s.el" pid))
         (list (list (list 'uptime uptime) (list 'pid pid)
                     (list 'garbage garbage) (list 'buffers-size buffers-size))))
    ;; Write everything out as one readable Lisp form.
    (with-temp-file file
      (insert (prin1-to-string list)))
    (message file)))





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 17:44                         ` Eli Zaretskii
@ 2020-11-23 18:34                           ` Arthur Miller
  2020-11-23 19:06                             ` Jean Louis
  2020-11-23 19:15                             ` Eli Zaretskii
  0 siblings, 2 replies; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 18:34 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

[-- Attachment #1: Type: text/plain, Size: 3669 bytes --]

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: Jean Louis <bugs@gnu.support>,  fweimer@redhat.com,
>>   43389@debbugs.gnu.org,  dj@redhat.com,  michael_heerdegen@web.de,
>>   trevor@trevorbentley.com,  carlos@redhat.com
>> Date: Mon, 23 Nov 2020 18:19:32 +0100
>> 
>> > The glibc malloc is the prime suspect anyway.  I don't really believe Emacs had
>> > such a glaring memory leak.
>> 
>> This has to be something introduced fairly recently, right?
>
> Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
> somewhat differently, and OTOH glibc removed some of the malloc hooks
> we used to use in versions of Emacs before 26.  In addition, glibc is
> also being developed, and maybe some change there somehow triggered
> this.
It has been long since v26, and the pdumper as well :-) You know I am
rebuilding all the time and am on a relatively recent master, so I
would have noticed it earlier; it must be something from the last
month or so. I am not claiming anything exact, but it is not from too
far back.

> As you see, there's more than one factor that could possibly be
> related.
Yeah; I understand that :-). 

>> I didn't have any such problems before, but since maybe a few weeks
>> ago I have also experienced heavy lockups of my entire OS, to the
>> point where all of X11 got unresponsive; when it happens I can't even
>> switch to a terminal to kill Emacs. What I do is Alt-Shift to another
>> virtual Linux console. I don't even need to log into the system in
>> that console; I can then Alt-Shift 1 to go back to the one I am logged
>> into, and everything is normal. Emacs is restarted by systemd every
>> time, and everything is responsive and working as normal.
>>
>> This started some time ago, and I have noticed that it happens when I
>> am cleaning my disk and reading big directories in Dired (I have some
>> with ~7k-10k files in them). I was using Helm to complete paths while
>> shifting files and folders around. After maybe an hour or so I would
>> experience a big slowdown. I don't have a swap file enabled on my
>> system at all, so I am not sure what was going on, but I haven't had
>> time to participate in this memory leak thing yet. I haven't
>> experienced any problems since I last recompiled Emacs, which was on
>> the 18th (last Wednesday). I recompiled without GTK this time, but I
>> have no idea whether it has anything to do with the issue; it was just
>> a wild shot to see if things are better.
>
> If the problem is memory, I suggest looking at the system log to see
> whether there are any signs of that.
Nothing else crashes, and I have 32 gigs, so I am not sure what the
problem can be.

It is obvious that Emacs causes the lockup, but I don't know how.
I am not really sure what to make of the syslog in this case either.

You can take a peek at the last crash I had (the 17th, last week), if
it tells you anything more than what apps I use :-). I was playing
music with Emacs, so you will see it start with pulseaudio, and what
happened until Emacs restarted. As you can see, everything happens
within a 4-second interval, so that must be the point when I switched
to another console with Alt+Shift. I have no idea why systemd kills
Emacs when I do that either, but I discovered that it does. My
intention from the beginning was just to pkill Emacs, hoping it was
only X11 that was locked, not the entire system, but I discovered that
I didn't even need to kill Emacs; it was already killed by the time I
logged into the other console, and everything seemed to work fine
after the switch. So I have kept using that as my workaround since
this started, 3-4 weeks ago? At least as far as I am aware.


[-- Attachment #2: crash-log.txt --]
[-- Type: text/plain, Size: 13674 bytes --]

nov 17 16:32:44 pascal kernel: pulseaudio invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
nov 17 16:32:44 pascal kernel: CPU: 3 PID: 1520 Comm: pulseaudio Tainted: P           OE     5.9.8-arch1-1 #1
nov 17 16:32:44 pascal kernel: Hardware name: Gigabyte Technology Co., Ltd. Z170X-Gaming 7/Z170X-Gaming 7, BIOS F22f 06/28/2017
nov 17 16:32:44 pascal kernel: Call Trace:
nov 17 16:32:44 pascal kernel:  dump_stack+0x6b/0x83
nov 17 16:32:44 pascal kernel:  dump_header+0x4a/0x1f7
nov 17 16:32:44 pascal kernel:  oom_kill_process.cold+0xb/0x10
nov 17 16:32:44 pascal kernel:  out_of_memory+0x1a9/0x4d0
nov 17 16:32:44 pascal kernel:  __alloc_pages_slowpath.constprop.0+0xc3d/0xd10
nov 17 16:32:44 pascal kernel:  __alloc_pages_nodemask+0x2f2/0x320
nov 17 16:32:44 pascal kernel:  pagecache_get_page+0x14a/0x360
nov 17 16:32:44 pascal kernel:  filemap_fault+0x682/0x8f0
nov 17 16:32:44 pascal kernel:  ext4_filemap_fault+0x2d/0x40 [ext4]
nov 17 16:32:44 pascal kernel:  __do_fault+0x38/0xd0
nov 17 16:32:44 pascal kernel:  handle_mm_fault+0x1542/0x1a40
nov 17 16:32:44 pascal kernel:  do_user_addr_fault+0x1e3/0x420
nov 17 16:32:44 pascal kernel:  exc_page_fault+0x82/0x1c0
nov 17 16:32:44 pascal kernel:  ? asm_exc_page_fault+0x8/0x30
nov 17 16:32:44 pascal kernel:  asm_exc_page_fault+0x1e/0x30
nov 17 16:32:44 pascal kernel: RIP: 0033:0x7f9876da1ce0
nov 17 16:32:44 pascal kernel: Code: Unable to access opcode bytes at RIP 0x7f9876da1cb6.
nov 17 16:32:44 pascal kernel: RSP: 002b:00007ffd6eca1538 EFLAGS: 00010202
nov 17 16:32:44 pascal kernel: RAX: 0000000000000000 RBX: 00007ffd6eca15b0 RCX: 0000000000000000
nov 17 16:32:44 pascal kernel: RDX: 0000000000000086 RSI: 0000000000000000 RDI: 00007ffd6eca15b0
nov 17 16:32:44 pascal kernel: RBP: 000055e18314c4a0 R08: 00007ffd6eca15b0 R09: 00007ffd6eca15b0
nov 17 16:32:44 pascal kernel: R10: 00007ffd6edf3080 R11: 0000000000000293 R12: 0000000000000086
nov 17 16:32:44 pascal kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 00007f9871e2c1e0
nov 17 16:32:44 pascal kernel: Mem-Info:
nov 17 16:32:44 pascal kernel: active_anon:728 inactive_anon:8062765 isolated_anon:0
                                active_file:81 inactive_file:107 isolated_file:0
                                unevictable:0 dirty:0 writeback:0
                                slab_reclaimable:18617 slab_unreclaimable:18473
                                mapped:113012 shmem:110501 pagetables:18977 bounce:0
                                free:49932 free_pcp:155 free_cma:0
nov 17 16:32:44 pascal kernel: Node 0 active_anon:2912kB inactive_anon:32251060kB active_file:324kB inactive_file:428kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:452048kB >
nov 17 16:32:44 pascal kernel: Node 0 DMA free:11796kB min:32kB low:44kB high:56kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB>
nov 17 16:32:44 pascal kernel: lowmem_reserve[]: 0 2439 32035 32035 32035
nov 17 16:32:44 pascal kernel: Node 0 DMA32 free:123516kB min:5140kB low:7636kB high:10132kB reserved_highatomic:0KB active_anon:0kB inactive_anon:2378704kB active_file:228kB inactive_file:3>
nov 17 16:32:44 pascal kernel: lowmem_reserve[]: 0 0 29595 29595 29595
nov 17 16:32:44 pascal kernel: Node 0 Normal free:64416kB min:62404kB low:92708kB high:123012kB reserved_highatomic:0KB active_anon:2912kB inactive_anon:29872356kB active_file:500kB inactive>
nov 17 16:32:44 pascal kernel: lowmem_reserve[]: 0 0 0 0 0
nov 17 16:32:44 pascal kernel: Node 0 DMA: 1*4kB (U) 2*8kB (U) 2*16kB (U) 1*32kB (U) 3*64kB (U) 2*128kB (U) 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11796kB
nov 17 16:32:44 pascal kernel: Node 0 DMA32: 87*4kB (UME) 146*8kB (UME) 172*16kB (UME) 161*32kB (UME) 154*64kB (UME) 141*128kB (UME) 94*256kB (UE) 63*512kB (UME) 24*1024kB (UME) 3*2048kB (U)>
nov 17 16:32:44 pascal kernel: Node 0 Normal: 2571*4kB (UME) 1787*8kB (UME) 1175*16kB (UME) 422*32kB (UME) 112*64kB (UME) 0*128kB 0*256kB 0*512kB 1*1024kB (M) 0*2048kB 0*4096kB = 65076kB
nov 17 16:32:44 pascal kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
nov 17 16:32:44 pascal kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
nov 17 16:32:44 pascal kernel: 110712 total pagecache pages
nov 17 16:32:44 pascal kernel: 0 pages in swap cache
nov 17 16:32:44 pascal kernel: Swap cache stats: add 0, delete 0, find 0/0
nov 17 16:32:44 pascal kernel: Free swap  = 0kB
nov 17 16:32:44 pascal kernel: Total swap = 0kB
nov 17 16:32:44 pascal kernel: 8377495 pages RAM
nov 17 16:32:44 pascal kernel: 0 pages HighMem/MovableOnly
nov 17 16:32:44 pascal kernel: 167705 pages reserved
nov 17 16:32:44 pascal kernel: 0 pages hwpoisoned
nov 17 16:32:44 pascal kernel: Tasks state (memory values in pages):
nov 17 16:32:44 pascal kernel: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
nov 17 16:32:44 pascal kernel: [    264]     0   264    19505      315   167936        0          -250 systemd-journal
nov 17 16:32:44 pascal kernel: [    285]     0   285     5209      766    81920        0         -1000 systemd-udevd
nov 17 16:32:44 pascal kernel: [    289]     0   289    19523       35    53248        0             0 lvmetad
nov 17 16:32:44 pascal kernel: [    400]     0   400     1645      118    53248        0             0 mount.ntfs-3g
nov 17 16:32:44 pascal kernel: [    541]     0   541     1660      117    49152        0             0 mount.ntfs-3g
nov 17 16:32:44 pascal kernel: [    542]     0   542     1660      120    40960        0             0 mount.ntfs-3g
nov 17 16:32:44 pascal kernel: [    543]     0   543     3067     1570    65536        0             0 mount.ntfs-3g
nov 17 16:32:44 pascal kernel: [    547]   192   547    23218      241    90112        0             0 systemd-timesyn
nov 17 16:32:44 pascal kernel: [    555]    81   555     1945      244    49152        0          -900 dbus-daemon
nov 17 16:32:44 pascal kernel: [    560]   985   560      789       70    45056        0             0 dhcpcd
nov 17 16:32:44 pascal kernel: [    561]     0   561      826       88    45056        0             0 dhcpcd
nov 17 16:32:44 pascal kernel: [    562]   985   562      706       68    45056        0             0 dhcpcd
nov 17 16:32:44 pascal kernel: [    563]   985   563      704       68    45056        0             0 dhcpcd
nov 17 16:32:44 pascal kernel: [    564]     0   564     4752      325    81920        0             0 systemd-logind
nov 17 16:32:44 pascal kernel: [    569]     0   569     1660      160    49152        0             0 login
nov 17 16:32:44 pascal kernel: [    598]  1000   598     5137      491    86016        0             0 systemd
nov 17 16:32:44 pascal kernel: [    599]  1000   599     7883      732    94208        0             0 (sd-pam)
nov 17 16:32:44 pascal kernel: [    605]  1000   605  7326490  7260247 58515456        0             0 emacs
nov 17 16:32:44 pascal kernel: [    608]  1000   608     1830      141    57344        0             0 startx
nov 17 16:32:44 pascal kernel: [    660]  1000   660      969       34    45056        0             0 xinit
nov 17 16:32:44 pascal kernel: [    661]  1000   661    82359    43891   573440        0             0 Xorg
nov 17 16:32:44 pascal kernel: [    671]  1000   671     1805      148    49152        0             0 dbus-daemon
nov 17 16:32:44 pascal kernel: [    678]   985   678      826       87    45056        0             0 dhcpcd
nov 17 16:32:44 pascal kernel: [    687]  1000   687     1797       85    49152        0             0 loginscript.sh
nov 17 16:32:44 pascal kernel: [    693]  1000   693    96527    12788   430080        0             0 compiz
nov 17 16:32:44 pascal kernel: [    694]  1000   694   332429     1374   299008        0             0 conky
nov 17 16:32:44 pascal kernel: [    695]  1000   695    11330     6942   131072        0             0 st
nov 17 16:32:44 pascal kernel: [    697]  1000   697  1108967   104024  2732032        0             0 firefox
nov 17 16:32:44 pascal kernel: [    699]  1000   699     1333       19    49152        0             0 sleep
nov 17 16:32:44 pascal kernel: [    711]  1000   711    17976      700    94208        0             0 xbindkeys
nov 17 16:32:44 pascal kernel: [    825]  1000   825    59083      227    90112        0             0 gvfsd
nov 17 16:32:44 pascal kernel: [    844]  1000   844    94755      203    90112        0             0 gvfsd-fuse
nov 17 16:32:44 pascal kernel: [    859]  1000   859     2872     1136    61440        0             0 bash
nov 17 16:32:44 pascal kernel: [   1257]  1000  1257    76110      154    81920        0             0 at-spi-bus-laun
nov 17 16:32:44 pascal kernel: [   1329]  1000  1329   667494    49108  1318912        0             0 Web Content
nov 17 16:32:44 pascal kernel: [   1418]   133  1418    38418       63    69632        0             0 rtkit-daemon
nov 17 16:32:44 pascal kernel: [   1435]   102  1435   744600     1229   253952        0             0 polkitd
nov 17 16:32:44 pascal kernel: [   1450]  1000  1450  8640623    68340  3117056        0             0 WebExtensions
nov 17 16:32:44 pascal kernel: [   1520]  1000  1520   292542     1317   143360        0             0 pulseaudio
nov 17 16:32:44 pascal kernel: [   1545]  1000  1545    59118      219    94208        0             0 gsettings-helpe
nov 17 16:32:44 pascal kernel: [   1563]  1000  1563   669234    34198  1302528        0             0 Privileged Cont
nov 17 16:32:44 pascal kernel: [   1731]  1000  1731    74255    19585   544768        0             0 RDD Process
nov 17 16:32:44 pascal kernel: [   4310]  1000  4310   135024     1465   237568        0             0 kactivitymanage
nov 17 16:32:44 pascal kernel: [   4317]  1000  4317    56426      654   151552        0             0 kglobalaccel5
nov 17 16:32:44 pascal kernel: [  10797]     0 10797    62618      410   102400        0             0 upowerd
nov 17 16:32:44 pascal kernel: [  12950]  1000 12950   712416    57515  1986560        0             0 Web Content
nov 17 16:32:44 pascal kernel: [  20825]  1000 20825   809356   104599  2670592        0             0 Web Content
nov 17 16:32:44 pascal kernel: [  21785]  1000 21785      590       19    40960        0             0 emacsclient
nov 17 16:32:44 pascal kernel: [  22350]  1000 22350   605480     5705   540672        0             0 Web Content
nov 17 16:32:44 pascal kernel: [  22402]  1000 22402   715592   296243  2822144        0             0 qbittorrent
nov 17 16:32:44 pascal kernel: [  23526]  1000 23526   359746    20170  1736704        0             0 okular
nov 17 16:32:44 pascal kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service,task=emacs,pid=>
nov 17 16:32:44 pascal kernel: Out of memory: Killed process 605 (emacs) total-vm:29305960kB, anon-rss:29035892kB, file-rss:0kB, shmem-rss:5096kB, UID:1000 pgtables:57144kB oom_score_adj:0
nov 17 16:32:45 pascal systemd[1]: Started Getty on tty2.
nov 17 16:32:45 pascal kernel: audit: type=1130 audit(1605627165.019:102): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=getty@tty2 comm="systemd" exe="/usr/lib/systemd/systemd" hostn>
nov 17 16:32:45 pascal audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=getty@tty2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=>
nov 17 16:32:45 pascal kernel: oom_reaper: reaped process 605 (emacs), now anon-rss:0kB, file-rss:0kB, shmem-rss:5096kB
nov 17 16:32:45 pascal systemd[598]: emacs.service: Main process exited, code=killed, status=9/KILL
nov 17 16:32:45 pascal systemd[598]: emacs.service: Failed with result 'signal'.
nov 17 16:32:46 pascal systemd[598]: emacs.service: Scheduled restart job, restart counter is at 1.
nov 17 16:32:46 pascal systemd[598]: Stopped Emacs text editor.
nov 17 16:32:46 pascal systemd[598]: Starting Emacs text editor...
nov 17 16:32:46 pascal emacs[29603]: Loading /home/arthur/.emacs.d/lisp/init.el (source)...
nov 17 16:32:47 pascal emacs[29603]: Loading /home/arthur/.emacs.d/etc/recentf...
nov 17 16:32:47 pascal emacs[29603]: Loading /home/arthur/.emacs.d/etc/recentf...done
nov 17 16:32:47 pascal emacs[29603]: Loading /home/arthur/.emacs.d/lisp/emacs-custom.el (source)...
nov 17 16:32:47 pascal emacs[29603]: Loading /home/arthur/.emacs.d/lisp/emacs-custom.el (source)...done
nov 17 16:32:47 pascal emacs[29603]: Warning (defvaralias): Overwriting value of ‘save-place’ by aliasing to
nov 17 16:32:47 pascal emacs[29603]: ‘save-place-mode’ Disable showing Disable logging
nov 17 16:32:47 pascal emacs[29603]: Loading /home/arthur/.emacs.d/etc/recentf...
nov 17 16:32:47 pascal emacs[29603]: Loading /home/arthur/.emacs.d/etc/recentf...done
nov 17 16:32:48 pascal emacs[29603]: Loading /home/arthur/.emacs.d/lisp/init.el (source)...done
nov 17 16:32:48 pascal emacs[29603]: Starting Emacs daemon.
nov 17 16:32:48 pascal systemd[598]: Started Emacs text editor.
nov 17 16:32:48 pascal emacs[29603]: Package nnir is deprecated
nov 17 16:32:48 pascal rtkit-daemon[1418]: Supervising 6 threads of 3 processes of 1 users.
nov 17 16:32:48 pascal rtkit-daemon[1418]: Successfully made thread 29609 of process 1520 owned by '1000' RT at priority 5.
nov 17 16:32:48 pascal rtkit-daemon[1418]: Supervising 7 threads of 3 processes of 1 users.

^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 17:45                     ` Eli Zaretskii
@ 2020-11-23 18:40                       ` Arthur Miller
  2020-11-23 19:23                         ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 18:40 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: Jean Louis <bugs@gnu.support>,  fweimer@redhat.com,
>>   43389@debbugs.gnu.org,  dj@redhat.com,  michael_heerdegen@web.de,
>>   trevor@trevorbentley.com,  carlos@redhat.com
>> Date: Mon, 23 Nov 2020 18:29:40 +0100
>> 
>> For me it happens really, really fast. Things work normally, and then
>> suddenly everything freezes, and after the first freeze it takes
>> forever to see the result of any keypress. For example, video in
>> Firefox slows down to about a frame per minute; I can see that the
>> system is alive, but it is impossible to type something like
>> (garbage-collect) and see the result; I would be sitting here for a
>> day :-).
>
> That doesn't sound like a memory problem to me.
OK, acknowledged; any idea what it could be? I have attached a syslog
from one crash point; you can see Emacs is using almost 8 gigs of RAM,
but I have 32, so there is plenty of unused RAM left over. Maybe
Emacs's internal bookkeeping of memory? Number of pages? I have no
idea myself; sorry if I am not so helpful.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 15:54                     ` Carlos O'Donell
@ 2020-11-23 18:58                       ` Jean Louis
  2020-11-23 19:34                         ` Eli Zaretskii
  2020-11-23 19:37                         ` Carlos O'Donell
  0 siblings, 2 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 18:58 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor

* Carlos O'Donell <carlos@redhat.com> [2020-11-23 18:54]:
> On 11/23/20 8:27 AM, Jean Louis wrote:
> > * Eli Zaretskii <eliz@gnu.org> [2020-11-22 23:17]:
> >>> Date: Sun, 22 Nov 2020 22:52:14 +0300
> >>> From: Jean Louis <bugs@gnu.support>
> >>> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
> >>>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
> >>>
> >>> I am now following this strategy here:
> >>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> >>
> >> That uses a different implementation of malloc, so I'm not sure it
> >> will help us.
> > 
> > This is how I have run the shorter Emacs session until it got blocked:
> > 
> > MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> > 
> > And here is mtrace:
> > 
> > https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
> > 
> > I cannot run Emacs that way, as something happens and Emacs gets
> > blocked. The problem arrives with M-s M-w, searching for anything on
> > the Internet with eww. Everything blocks, and I get the message:
> > 
> > error in process filter: Quit
> 
> Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
> which may affect the process if using pipes.

# MTRACE_CTL_VERBOSE=1
MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1

I have tried it like the above, and it blocks as soon as eww loads
some page, with the same error as previously.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 18:34                           ` Arthur Miller
@ 2020-11-23 19:06                             ` Jean Louis
  2020-11-23 19:15                             ` Eli Zaretskii
  1 sibling, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 19:06 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Arthur Miller <arthur.miller@live.com> [2020-11-23 21:34]:
> It has been long since v26, and the pdumper as well :-) You know I am
> rebuilding all the time and am on a relatively recent master, so I
> would have noticed it earlier; it must be something from the last
> month or so. I am not claiming anything exact, but it is not from too
> far back.
 
I do not remember having this problem by the Bwindi Impenetrable
Forest until July 14th, and the computer was turned on all the time:
it went to sleep, was turned on again. But it was a different computer
with 8GB, while this one has 4GB.

I was using EXWM. My experience is similar to Arthur's, though I think
it goes back a little longer than one month.

Maybe, instead of all the debuggers, our human experience can narrow
down the approximate change that introduced this.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 18:34                           ` Arthur Miller
  2020-11-23 19:06                             ` Jean Louis
@ 2020-11-23 19:15                             ` Eli Zaretskii
  2020-11-23 19:49                               ` Arthur Miller
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 19:15 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

> From: Arthur Miller <arthur.miller@live.com>
> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>   carlos@redhat.com
> Date: Mon, 23 Nov 2020 19:34:26 +0100
> 
> >> This has to be something introduced fairly recently, right?
> >
> > Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
> > somewhat differently, and OTOH glibc removed some of the malloc hooks
> > we used to use in versions of Emacs before 26.  In addition, glibc is
> > also being developed, and maybe some change there somehow triggered
> > this.
> It has been long since v26, and the pdumper as well :-) You know I am
> rebuilding all the time and am on a relatively recent master, so I
> would have noticed it earlier; it must be something from the last
> month or so.

Not necessarily.  This problem seems to happen rarely, and not for
everyone.  So it's entirely possible you didn't see it by sheer luck.

> > If the problem is memory, I suggest looking at the system log to see
> > whether there are any signs of that.
> Nothing else crashes, and I have 32 gigs, so I am not sure what the
> problem can be.

Then it most probably isn't memory.  IOW, not the problem discussed in
this bug report.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 18:40                       ` Arthur Miller
@ 2020-11-23 19:23                         ` Eli Zaretskii
  2020-11-23 19:38                           ` Arthur Miller
  2020-11-23 19:39                           ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  0 siblings, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 19:23 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

> From: Arthur Miller <arthur.miller@live.com>
> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>   carlos@redhat.com
> Date: Mon, 23 Nov 2020 19:40:23 +0100
> 
> > That doesn't sound like a memory problem to me.
> OK, acknowledged; any idea what it could be?

Actually, I take that back: it does look like the OOM killer that
killed Emacs:

  nov 17 16:32:44 pascal kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service,task=emacs,pid=>
  nov 17 16:32:44 pascal kernel: Out of memory: Killed process 605 (emacs) total-vm:29305960kB, anon-rss:29035892kB, file-rss:0kB, shmem-rss:5096kB, UID:1000 pgtables:57144kB oom_score_adj:0

> I have attached a syslog from one crash point; you can see Emacs is
> using almost 8 gigs of RAM, but I have 32, so there is plenty of
> unused RAM left over.

It says above that the total VM size of the Emacs process was 29GB,
not 8.

So maybe yours is the same problem after all.

How about writing a simple function that reports the total VM size of
the Emacs process (via process-attributes), and running it from some
timer?  Then you could see how long it takes you to get from, say, 2GB
to more than 20GB, and maybe also take notes of what you are doing at
that time?
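
Something like the following would do (a rough sketch; the log file
name, the one-minute interval, and the function name are arbitrary):

  (defvar my-vm-size-log "~/tmp/emacs-vm-size.log")

  (defun my-log-emacs-vm-size ()
    "Append a time-stamped VM size sample to `my-vm-size-log'."
    ;; process-attributes reports vsize in kilobytes.
    (let ((vsize (alist-get 'vsize (process-attributes (emacs-pid)))))
      (with-temp-buffer
        (insert (format-time-string "%F %T"))
        (insert (format " vsize: %d MB\n" (/ (or vsize 0) 1024)))
        (append-to-file (point-min) (point-max) my-vm-size-log))))

  (run-with-timer 0 60 #'my-log-emacs-vm-size)

You could then correlate the jumps in that log with your notes.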





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 18:58                       ` Jean Louis
@ 2020-11-23 19:34                         ` Eli Zaretskii
  2020-11-23 19:49                           ` Jean Louis
  2020-11-23 20:04                           ` Carlos O'Donell
  2020-11-23 19:37                         ` Carlos O'Donell
  1 sibling, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 19:34 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

> Date: Mon, 23 Nov 2020 21:58:28 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: Eli Zaretskii <eliz@gnu.org>, fweimer@redhat.com,
>   43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de,
>   trevor@trevorbentley.com
> 
> > Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
> > which may affect the process if using pipes.
> 
> # MTRACE_CTL_VERBOSE=1
> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1

Any reason you redirect stderr to stdout?  I'm not saying that is the
reason for the EWW problems, but just to be sure, can you try without
that?  The trace goes to stderr, right?  So just "2> file" should be
sufficient to collect the trace.  Carlos, am I right?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 18:58                       ` Jean Louis
  2020-11-23 19:34                         ` Eli Zaretskii
@ 2020-11-23 19:37                         ` Carlos O'Donell
  2020-11-23 19:55                           ` Jean Louis
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-23 19:37 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor

On 11/23/20 1:58 PM, Jean Louis wrote:
> * Carlos O'Donell <carlos@redhat.com> [2020-11-23 18:54]:
>> On 11/23/20 8:27 AM, Jean Louis wrote:
>>> * Eli Zaretskii <eliz@gnu.org> [2020-11-22 23:17]:
>>>>> Date: Sun, 22 Nov 2020 22:52:14 +0300
>>>>> From: Jean Louis <bugs@gnu.support>
>>>>> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>>>>>   michael_heerdegen@web.de, trevor@trevorbentley.com, carlos@redhat.com
>>>>>
>>>>> I am now following this strategy here:
>>>>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
>>>>
>>>> That uses a different implementation of malloc, so I'm not sure it
>>>> will help us.
>>>
>>> This is how I have run the shorter Emacs session until it got blocked:
>>>
>>> MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
>>>
>>> And here is mtrace:
>>>
>>> https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
>>>
>>> I cannot run Emacs that way, as something happens and Emacs gets
>>> blocked. The problem arrives with M-s M-w, searching for anything on
>>> the Internet with eww. Everything blocks, and I get the message:
>>>
>>> error in process filter: Quit
>>
>> Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
>> which may affect the process if using pipes.
> 
> # MTRACE_CTL_VERBOSE=1
> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> I have tried it like the above, and it blocks as soon as eww loads
> some page, with the same error as previously.

That's interesting. Are you able to attach gdb and get a backtrace to see
what the process is blocked on?
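
For example (with <pid> being the PID of the running Emacs):

  gdb -p <pid>
  (gdb) thread apply all bt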

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:23                         ` Eli Zaretskii
@ 2020-11-23 19:38                           ` Arthur Miller
  2020-11-23 19:52                             ` Eli Zaretskii
  2020-11-23 19:39                           ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  1 sibling, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 19:38 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>>   carlos@redhat.com
>> Date: Mon, 23 Nov 2020 19:40:23 +0100
>> 
>> > That doesn't sound like a memory problem to me.
>> OK, acknowledged; any idea what it could be?
>
> Actually, I take that back: it does look like the OOM killer that
> killed Emacs:
>
>   nov 17 16:32:44 pascal kernel:
> oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service,task=emacs,pid=>
>   nov 17 16:32:44 pascal kernel: Out of memory: Killed process 605 (emacs)
> total-vm:29305960kB, anon-rss:29035892kB, file-rss:0kB, shmem-rss:5096kB,
> UID:1000 pgtables:57144kB oom_score_adj:0

>> I have attached a syslog from one crash point; you can see Emacs is
>> using almost 8 gigs of RAM, but I have 32, so there is plenty of
>> unused RAM left over.
Haha, I'm such a noob :-). You have an eagle eye; I wasn't looking
carefully. I just looked at the process list, which showed ~7 gigs of
RAM.
> It says above that the total VM size of the Emacs process was 29GB,
> not 8.
>
> So maybe yours is the same problem after all.

> How about writing a simple function that reports the total VM size of
> the Emacs process (via process-attributes), and running it from some
> timer?  Then you could see how long it takes you to get from, say, 2GB
> to more than 20GB, and maybe also take notes of what you are doing at
> that time?
Ouch; I have to look up (process-attributes) in the info ... :-(. I
planned to do something else today, but I'll give it a look.

By the way, I haven't experienced this since the 18th this month, the
day after I rebuilt. So it has been almost 5 days without a crash. But
I also don't shift big folders around any more; I cleaned up my old
backup drive. Is there some hefty RAM-taxing benchmark with lots of
random list creations and deletions I could run; maybe some suitable
ert test already written?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:23                         ` Eli Zaretskii
  2020-11-23 19:38                           ` Arthur Miller
@ 2020-11-23 19:39                           ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  2020-11-23 19:59                             ` Arthur Miller
  1 sibling, 1 reply; 166+ messages in thread
From: Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors @ 2020-11-23 19:39 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos,
	Arthur Miller

I think it would be nice to have a script that monitors the Emacs
memory footprint and attaches gdb to it when the memory usage is over
a certain (high) threshold.

This way it should be easy to see what we are doing, because at that
point we are supposed to be allocating extremely often.
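
Roughly, this could even be done from inside Emacs itself (a sketch
only; the 4 GB threshold is arbitrary, and it assumes gdb is installed
and permitted to ptrace-attach to the process):

  (defun my-bt-if-bloated ()
    "Dump gdb backtraces of this Emacs once its vsize passes 4 GB."
    (let ((vsize (alist-get 'vsize (process-attributes (emacs-pid)))))
      (when (and vsize (> vsize (* 4 1024 1024))) ; vsize is in KB
        ;; gdb stops Emacs briefly, prints all thread backtraces into
        ;; the *gdb-bt* buffer, then detaches and Emacs resumes.
        (start-process "gdb-bt" "*gdb-bt*" "gdb" "-batch"
                       "-p" (number-to-string (emacs-pid))
                       "-ex" "thread apply all bt"))))

  (run-with-timer 60 60 #'my-bt-if-bloated)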

  Andrea





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:15                             ` Eli Zaretskii
@ 2020-11-23 19:49                               ` Arthur Miller
  2020-11-23 20:04                                 ` Eli Zaretskii
  2020-11-23 20:31                                 ` Jean Louis
  0 siblings, 2 replies; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 19:49 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>>   carlos@redhat.com
>> Date: Mon, 23 Nov 2020 19:34:26 +0100
>> 
>> >> This has to be something introduced fairly recently, right?
>> >
>> > Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
>> > somewhat differently, and OTOH glibc removed some of the malloc hooks
>> > we used to use in versions of Emacs before 26.  In addition, glibc is
>> > also being developed, and maybe some change there somehow triggered
>> > this.
>> It has been long since v26, and the pdumper as well :-) You know I am
>> rebuilding all the time and am on a relatively recent master, so I
>> would have noticed it earlier; it must be something from the last
>> month or so.
>
> Not necessarily.  This problem seems to happen rarely, and not for
> everyone.  So it's entirely possible you didn't see it by sheer luck.
Of course, but why would I suddenly start to experience it? Neither my
usage pattern nor my Emacs or system configuration changed at that
time. It can't be just sheer luck; I haven't done anything differently
that I wasn't doing 2 or 6 months before; same old, just a newer
master and system updates.

The only thing that changed regularly was of course system updates:
kernel, gcc & co. So maybe it is, as mentioned earlier in this thread
by either you or somebody else, that glibc changed, and that change
maybe triggers something in Emacs based on how Emacs uses it. I don't
know, I am no expert in this. Isn't Valgrind good for this kind of
problem? Can I run Emacs as a systemd service under Valgrind?






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:34                         ` Eli Zaretskii
@ 2020-11-23 19:49                           ` Jean Louis
  2020-11-23 20:04                           ` Carlos O'Donell
  1 sibling, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 19:49 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

* Eli Zaretskii <eliz@gnu.org> [2020-11-23 22:35]:
> > Date: Mon, 23 Nov 2020 21:58:28 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: Eli Zaretskii <eliz@gnu.org>, fweimer@redhat.com,
> >   43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de,
> >   trevor@trevorbentley.com
> > 
> > > Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
> > > which may affect the process if using pipes.
> > 
> > # MTRACE_CTL_VERBOSE=1
> > MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> Any reason you redirect stderr to stdout?  I'm not saying that is the
> reason for the EWW problems, but just to be sure, can you try without
> that?  The trace goes to stderr, right?  So just "2> file" should be
> sufficient to collect the trace.  Carlos, am I right?

That could be. I have just tried with:

MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs

and there is some lockup; I have to invoke xkill to kill Emacs.

I wonder why it worked before.

Now it also blocks like this:

LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs

It must be something with my configuration, so I will research and try
again once I find out what the problem is.







^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 13:27                   ` Jean Louis
  2020-11-23 15:54                     ` Carlos O'Donell
@ 2020-11-23 19:50                     ` Carlos O'Donell
  2020-11-23 19:59                       ` Jean Louis
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-23 19:50 UTC (permalink / raw)
  To: Jean Louis, Eli Zaretskii; +Cc: fweimer, 43389, trevor, dj, michael_heerdegen

On 11/23/20 8:27 AM, Jean Louis wrote:
> And here is mtrace:
> https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz

Initial analysis is up:
https://sourceware.org/glibc/wiki/emacs-malloc

Nothing conclusive.

We need a longer trace that shows the problem.

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:38                           ` Arthur Miller
@ 2020-11-23 19:52                             ` Eli Zaretskii
  2020-11-23 20:03                               ` Arthur Miller
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 19:52 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

> From: Arthur Miller <arthur.miller@live.com>
> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>   carlos@redhat.com
> Date: Mon, 23 Nov 2020 20:38:45 +0100
> 
> By the way, I haven't experienced this since the 18th this month, the
> day after I rebuilt. So it has been almost 5 days without a crash. But
> I also don't shift big folders around any more; I cleaned up my old
> backup drive. Is there some hefty RAM-taxing benchmark with lots of
> random list creations and deletions I could run; maybe some suitable
> ert test already written?

I don't think so, and we don't have a clear idea yet regarding what
exactly causes this, so it's difficult to know what could be
relevant.  We must wait until something like that happens, and collect
data then.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:37                         ` Carlos O'Donell
@ 2020-11-23 19:55                           ` Jean Louis
  2020-11-23 20:06                             ` Carlos O'Donell
  2020-11-23 20:10                             ` Eli Zaretskii
  0 siblings, 2 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 19:55 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor

* Carlos O'Donell <carlos@redhat.com> [2020-11-23 22:37]:
> > 
> > # MTRACE_CTL_VERBOSE=1
> > MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> > 
> > I have tried like above and it will block as soon as eww is loads some
> > page with the same error as previously.
> 
> That's interesting. Are you able to attach gdb and get a backtrace to see
> what the process is blocked on?

I can do C-g once to interrupt something going on; then I get an error:

(gdb) continue
Continuing.
[New Thread 0x7f10ed01fc00 (LWP 25293)]
[New Thread 0x7f10ed007c00 (LWP 25294)]
[New Thread 0x7f10ecfefc00 (LWP 25295)]
[New Thread 0x7f10ecfd7c00 (LWP 25296)]
[Thread 0x7f10ed01fc00 (LWP 25293) exited]
[Thread 0x7f10ed007c00 (LWP 25294) exited]
[Thread 0x7f10ecfd7c00 (LWP 25296) exited]
[Thread 0x7f10ecfefc00 (LWP 25295) exited]
Here I cannot do anything with the GDB prompt; there is no prompt. I
can C-c and I get:

(gdb) continue
Continuing.
[New Thread 0x7f10ed01fc00 (LWP 25293)]
[New Thread 0x7f10ed007c00 (LWP 25294)]
[New Thread 0x7f10ecfefc00 (LWP 25295)]
[New Thread 0x7f10ecfd7c00 (LWP 25296)]
[Thread 0x7f10ed01fc00 (LWP 25293) exited]
[Thread 0x7f10ed007c00 (LWP 25294) exited]
[Thread 0x7f10ecfd7c00 (LWP 25296) exited]
[Thread 0x7f10ecfefc00 (LWP 25295) exited]

continue
^C
Thread 1 "emacs" received signal SIGINT, Interrupt.
0x00007f10fe08fe7d in read () from /lib/libpthread.so.0






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:39                           ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
@ 2020-11-23 19:59                             ` Arthur Miller
  2020-11-23 20:15                               ` Eli Zaretskii
  2020-11-23 20:53                               ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  0 siblings, 2 replies; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 19:59 UTC (permalink / raw)
  To: Andrea Corallo
  Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

Andrea Corallo <akrl@sdf.org> writes:

> I think it would be nice to have a script that monitors the Emacs
> memory footprint and attaches gdb to it when the memory usage is over
> a certain (high) threshold.
>
> This way it should be easy to see what we are doing, because at that
> point we are supposed to be allocating extremely often.
>
>   Andrea
Indeed.


How hard would it be, if possible at all, to use this tool with Emacs:

https://gperftools.github.io/gperftools/heapprofile.html

By the way, has anyone tried this one (heaptrack):

https://github.com/KDE/heaptrack
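
(If I understand their documentation correctly, both are preload-style
tools, so the invocations would look much like the libmtrace runs
above; the library path here is only a guess for my distribution:

  LD_PRELOAD=/usr/lib/libtcmalloc.so HEAPPROFILE=/tmp/emacs.hprof emacs

  heaptrack emacs

but I have not tried either yet.)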





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:50                     ` Carlos O'Donell
@ 2020-11-23 19:59                       ` Jean Louis
  0 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 19:59 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor

* Carlos O'Donell <carlos@redhat.com> [2020-11-23 22:50]:
> On 11/23/20 8:27 AM, Jean Louis wrote:
> > And here is mtrace:
> > https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
> 
> Initial analysis is up:
> https://sourceware.org/glibc/wiki/emacs-malloc
> 
> Nothing conclusive.
> 
> We need a longer trace that shows the problem.

At least it says there is nothing pathological about my behavior :-)

And it could just be a wrong indication.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:52                             ` Eli Zaretskii
@ 2020-11-23 20:03                               ` Arthur Miller
  0 siblings, 0 replies; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 20:03 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>>   carlos@redhat.com
>> Date: Mon, 23 Nov 2020 20:38:45 +0100
>> 
>> By the way, I haven't experienced this since the 18th of this month,
>> the day after I rebuilt. So it has been almost 5 days without a crash.
>> But I also don't shift big folders any more; I cleaned up my old backup
>> drive. Is there some hefty RAM-taxing benchmark with lots of random
>> list creations and deletions I could run; maybe some suitable ert test
>> is already written?
>
> I don't think so, and we don't have a clear idea yet regarding what
> exactly causes this, so it's difficult to know what could be
relevant.  We must wait until something like that happens, and collect
> data then.
Yes, OK, thanks.

I'll try to build heaptrack and see if it works well with Emacs first;
I'm a little bit curious about it.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:34                         ` Eli Zaretskii
  2020-11-23 19:49                           ` Jean Louis
@ 2020-11-23 20:04                           ` Carlos O'Donell
  2020-11-23 20:16                             ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-23 20:04 UTC (permalink / raw)
  To: Eli Zaretskii, Jean Louis; +Cc: fweimer, 43389, trevor, dj, michael_heerdegen

On 11/23/20 2:34 PM, Eli Zaretskii wrote:
>> Date: Mon, 23 Nov 2020 21:58:28 +0300
>> From: Jean Louis <bugs@gnu.support>
>> Cc: Eli Zaretskii <eliz@gnu.org>, fweimer@redhat.com,
>>   43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de,
>>   trevor@trevorbentley.com
>>
>>> Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
>>> which may affect the process if using pipes.
>>
>> # MTRACE_CTL_VERBOSE=1
>> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> Any reason you redirect stderr to stdout?  I'm not saying that is the
> reason for the EWW problems, but just to be sure, can you try without
> that?  The trace goes to stderr, right?  So just "2> file" should be
> sufficient to collect the trace.  Carlos, am I right?
 
No, the trace goes to the trace file specified by MTRACE_CTL_FILE.

By default the tracer is as minimally intrusive as possible.

-- 
Cheers,
Carlos.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:49                               ` Arthur Miller
@ 2020-11-23 20:04                                 ` Eli Zaretskii
  2020-11-23 21:12                                   ` Arthur Miller
  2020-11-24  2:07                                   ` Arthur Miller
  2020-11-23 20:31                                 ` Jean Louis
  1 sibling, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 20:04 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

> From: Arthur Miller <arthur.miller@live.com>
> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>   carlos@redhat.com
> Date: Mon, 23 Nov 2020 20:49:48 +0100
> 
> Isn't Valgrind good for this kind of problem? Can I run Emacs as a
> systemd service under Valgrind?

You can run Emacs under Valgrind, see etc/DEBUG for the details.  But
I'm not sure it will work as a systemd service.

Valgrind is only the right tool if we think there's a memory leak in
Emacs itself.
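
E.g., something like this (untested here; options to taste):

  valgrind --leak-check=yes --num-callers=20 ./emacs -Q 2> valgrind.log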






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:55                           ` Jean Louis
@ 2020-11-23 20:06                             ` Carlos O'Donell
  2020-11-23 20:18                               ` Jean Louis
  2020-11-23 20:10                             ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-23 20:06 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor

On 11/23/20 2:55 PM, Jean Louis wrote:
> * Carlos O'Donell <carlos@redhat.com> [2020-11-23 22:37]:
>>>
>>> # MTRACE_CTL_VERBOSE=1
>>> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
>>>
>>> I have tried it as above, and it blocks as soon as eww loads some
>>> page, with the same error as previously.
>>
>> That's interesting. Are you able to attach gdb and get a backtrace to see
>> what the process is blocked on?
> 
> I can press C-g once to interrupt whatever is going on, and then I get an error:
> 
> (gdb) continue
Please issue 'thread apply all backtrace' to get a backtrace from all
the threads to see where they are stuck.

You will need debug information for this for all associated frames in
the backtrace. Depending on your distribution this may require debug
information packages.
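
(On Fedora or RHEL that would be something like
'dnf debuginfo-install emacs glibc'; other distributions have their own
mechanisms.)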

-- 
Cheers,
Carlos.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:55                           ` Jean Louis
  2020-11-23 20:06                             ` Carlos O'Donell
@ 2020-11-23 20:10                             ` Eli Zaretskii
  1 sibling, 0 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 20:10 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

> Date: Mon, 23 Nov 2020 22:55:10 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: Eli Zaretskii <eliz@gnu.org>, fweimer@redhat.com,
>   43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de,
>   trevor@trevorbentley.com
> 
> > That's interesting. Are you able to attach gdb and get a backtrace to see
> > what the process is blocked on?
> 
> I can press C-g once to interrupt whatever is going on, and then I get an error:
> 
> (gdb) continue
> Continuing.

Instead of "continue", type "thread apply all bt", and post the
result.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:59                             ` Arthur Miller
@ 2020-11-23 20:15                               ` Eli Zaretskii
  2020-11-23 21:15                                 ` Arthur Miller
  2020-11-23 20:53                               ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 20:15 UTC (permalink / raw)
  To: Arthur Miller
  Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos, akrl

> From: Arthur Miller <arthur.miller@live.com>
> Cc: Eli Zaretskii <eliz@gnu.org>,  fweimer@redhat.com,
>   43389@debbugs.gnu.org,  bugs@gnu.support,  dj@redhat.com,
>   michael_heerdegen@web.de,  trevor@trevorbentley.com,  carlos@redhat.com
> Date: Mon, 23 Nov 2020 20:59:21 +0100
> 
>> How hard/feasible would it be to use this tool with Emacs:
> 
> https://gperftools.github.io/gperftools/heapprofile.html

AFAIU, this cannot be used with glibc's malloc, it needs libtcmalloc
instead.
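
IOW, you would have to run Emacs along the lines of

  LD_PRELOAD=/usr/lib/libtcmalloc.so HEAPPROFILE=/tmp/emacs.hprof emacs

(the library path is a guess and varies between distributions), and then
analyze the heap dumps with gperftools' pprof.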






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:04                           ` Carlos O'Donell
@ 2020-11-23 20:16                             ` Eli Zaretskii
  0 siblings, 0 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 20:16 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>  michael_heerdegen@web.de, trevor@trevorbentley.com
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Mon, 23 Nov 2020 15:04:33 -0500
> 
> > Any reason you redirect stderr to stdout?  I'm not saying that is the
> > reason for the EWW problems, but just to be sure, can you try without
> > that?  The trace goes to stderr, right?  So just "2> file" should be
> > sufficient to collect the trace.  Carlos, am I right?
>  
> No, the trace goes to the trace file specified by MTRACE_CTL_FILE.

Thanks, that's even easier: it means no standard stream needs to be
redirected.
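
I.e., simply

  MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr \
  LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so \
  emacs

should collect the trace without touching stdout or stderr.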






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:06                             ` Carlos O'Donell
@ 2020-11-23 20:18                               ` Jean Louis
  2020-11-23 20:31                                 ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-23 20:18 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor

* Carlos O'Donell <carlos@redhat.com> [2020-11-23 23:06]:
> On 11/23/20 2:55 PM, Jean Louis wrote:
> > * Carlos O'Donell <carlos@redhat.com> [2020-11-23 22:37]:
> >>>
> >>> # MTRACE_CTL_VERBOSE=1
> >>> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> >>>
> >>> I have tried it as above, and it blocks as soon as eww loads some
> >>> page, with the same error as previously.
> >>
> >> That's interesting. Are you able to attach gdb and get a backtrace to see
> >> what the process is blocked on?
> > 
> > I can press C-g once to interrupt whatever is going on, and then I get an error:
> > 
> > (gdb) continue
> Please issue 'thread apply all backtrace' to get a backtrace from all
> the threads to see where they are stuck.
> 
> You will need debug information for this for all associated frames in
> the backtrace. Depending on your distribution this may require debug
> information packages.

sudo gdb -pid 25584
GNU gdb (GDB) 7.12.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 25584
[New LWP 25585]
[New LWP 25586]
[New LWP 25588]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/libthread_db.so.1".
0x00007f6afd4765dc in pselect () from /lib/libc.so.6
(gdb) continue
Continuing.
[New Thread 0x7f6aed1dbc00 (LWP 25627)]
[Thread 0x7f6aed1dbc00 (LWP 25627) exited]
[New Thread 0x7f6aed1dbc00 (LWP 25628)]
[Thread 0x7f6aed1dbc00 (LWP 25628) exited]
  C-c C-c
Thread 1 "emacs" received signal SIGINT, Interrupt.
0x00007f6afd4765dc in pselect () from /lib/libc.so.6
(gdb) thread apply backtrace
Invalid thread ID: backtrace
(gdb) thread apply all backtrace

Thread 4 (Thread 0x7f6aee2ae700 (LWP 25588)):
#0  0x00007f6afd47435d in poll () at /lib/libc.so.6
#1  0x00007f6b011a4b98 in  () at /lib/libglib-2.0.so.0
#2  0x00007f6b011a4f52 in g_main_loop_run () at /lib/libglib-2.0.so.0
#3  0x00007f6b019b62c8 in  () at /usr/lib/libgio-2.0.so.0
#4  0x00007f6b011ccfca in  () at /lib/libglib-2.0.so.0
#5  0x00007f6afe242069 in start_thread () at /lib/libpthread.so.0
#6  0x00007f6afd47e30f in clone () at /lib/libc.so.6

Thread 3 (Thread 0x7f6aeeaaf700 (LWP 25586)):
#0  0x00007f6afd47435d in poll () at /lib/libc.so.6
#1  0x00007f6b011a4b98 in  () at /lib/libglib-2.0.so.0
#2  0x00007f6b011a4cbe in g_main_context_iteration () at /lib/libglib-2.0.so.0
#3  0x00007f6aeeab755d in  () at /usr/lib/gio/modules/libdconfsettings.so
#4  0x00007f6b011ccfca in  () at /lib/libglib-2.0.so.0
#5  0x00007f6afe242069 in start_thread () at /lib/libpthread.so.0
#6  0x00007f6afd47e30f in clone () at /lib/libc.so.6

Thread 2 (Thread 0x7f6aef6c8700 (LWP 25585)):
#0  0x00007f6afd47435d in poll () at /lib/libc.so.6
#1  0x00007f6b011a4b98 in  () at /lib/libglib-2.0.so.0
#2  0x00007f6b011a4cbe in g_main_context_iteration () at /lib/libglib-2.0.so.0
#3  0x00007f6b011a4d12 in  () at /lib/libglib-2.0.so.0
#4  0x00007f6b011ccfca in  () at /lib/libglib-2.0.so.0
#5  0x00007f6afe242069 in start_thread () at /lib/libpthread.so.0
#6  0x00007f6afd47e30f in clone () at /lib/libc.so.6

Thread 1 (Thread 0x7f6b049e9100 (LWP 25584)):
#0  0x00007f6afd4765dc in pselect () at /lib/libc.so.6
#1  0x00000000005cf500 in really_call_select (arg=0x7ffc16edfa80) at thread.c:592
#2  0x00000000005d006e in flush_stack_call_func (arg=0x7ffc16edfa80, func=0x5cf4b0 <really_call_select>) at lisp.h:3791
#3  0x00000000005d006e in thread_select (func=<optimized out>, max_fds=max_fds@entry=19, rfds=rfds@entry=0x7ffc16edfb60, wfds=wfds@entry=0x7ffc16edfbe0, efds=efds@entry=0x0, timeout=timeout@entry=0x7ffc16ee0170, sigmask=0x0) at thread.c:624
#4  0x00000000005eb023 in xg_select (fds_lim=19, rfds=rfds@entry=0x7ffc16ee02a0, wfds=0x7ffc16ee0320, efds=<optimized out>, timeout=<optimized out>, sigmask=<optimized out>) at xgselect.c:131
#5  0x00000000005aeab4 in wait_reading_process_output (time_limit=time_limit@entry=30, nsecs=nsecs@entry=0, read_kbd=-1, do_display=do_display@entry=true, wait_for_cell=wait_for_cell@entry=0x0, wait_proc=wait_proc@entry=0x0, just_wait_proc=0) at process.c:5604
#6  0x00000000004253f8 in sit_for (timeout=timeout@entry=0x7a, reading=reading@entry=true, display_option=display_option@entry=1) at dispnew.c:6111
#7  0x00000000004fe415 in read_char (commandflag=commandflag@entry=1, map=map@entry=0x3184a63, prev_event=<optimized out>, used_mouse_menu=used_mouse_menu@entry=0x7ffc16ee0b5b, end_time=end_time@entry=0x0) at keyboard.c:2742
#8  0x0000000000500841 in read_key_sequence (keybuf=keybuf@entry=0x7ffc16ee0c50, prompt=prompt@entry=0x0, dont_downcase_last=dont_downcase_last@entry=false, can_return_switch_frame=can_return_switch_frame@entry=true, fix_current_buffer=fix_current_buffer@entry=true, prevent_redisplay=prevent_redisplay@entry=false) at keyboard.c:9546
#9  0x0000000000502040 in command_loop_1 () at keyboard.c:1354
#10 0x000000000056a40e in internal_condition_case (bfun=bfun@entry=0x501e30 <command_loop_1>, handlers=handlers@entry=0x90, hfun=hfun@entry=0x4f8da0 <cmd_error>) at eval.c:1359
#11 0x00000000004f370c in command_loop_2 (ignore=ignore@entry=0x0) at keyboard.c:1095
#12 0x000000000056a3ac in internal_catch (tag=tag@entry=0xd740, func=func@entry=0x4f36f0 <command_loop_2>, arg=arg@entry=0x0) at eval.c:1120
#13 0x00000000004f36c9 in command_loop () at keyboard.c:1074
#14 0x00000000004f89c6 in recursive_edit_1 () at keyboard.c:718
#15 0x00000000004f8ce4 in Frecursive_edit () at keyboard.c:790
#16 0x000000000041a8f3 in main (argc=1, argv=0x7ffc16ee1048) at emacs.c:2047
(gdb) 






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:18                               ` Jean Louis
@ 2020-11-23 20:31                                 ` Eli Zaretskii
  2020-11-23 20:41                                   ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-23 20:31 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

> Date: Mon, 23 Nov 2020 23:18:13 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: Eli Zaretskii <eliz@gnu.org>, fweimer@redhat.com,
>   43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de,
>   trevor@trevorbentley.com
> 
> Thread 1 (Thread 0x7f6b049e9100 (LWP 25584)):
> #0  0x00007f6afd4765dc in pselect () at /lib/libc.so.6
> #1  0x00000000005cf500 in really_call_select (arg=0x7ffc16edfa80) at thread.c:592
> #2  0x00000000005d006e in flush_stack_call_func (arg=0x7ffc16edfa80, func=0x5cf4b0 <really_call_select>) at lisp.h:3791
> #3  0x00000000005d006e in thread_select (func=<optimized out>, max_fds=max_fds@entry=19, rfds=rfds@entry=0x7ffc16edfb60, wfds=wfds@entry=0x7ffc16edfbe0, efds=efds@entry=0x0, timeout=timeout@entry=0x7ffc16ee0170, sigmask=0x0) at thread.c:624
> #4  0x00000000005eb023 in xg_select (fds_lim=19, rfds=rfds@entry=0x7ffc16ee02a0, wfds=0x7ffc16ee0320, efds=<optimized out>, timeout=<optimized out>, sigmask=<optimized out>) at xgselect.c:131
> #5  0x00000000005aeab4 in wait_reading_process_output (time_limit=time_limit@entry=30, nsecs=nsecs@entry=0, read_kbd=-1, do_display=do_display@entry=true, wait_for_cell=wait_for_cell@entry=0x0, wait_proc=wait_proc@entry=0x0, just_wait_proc=0) at process.c:5604
> #6  0x00000000004253f8 in sit_for (timeout=timeout@entry=0x7a, reading=reading@entry=true, display_option=display_option@entry=1) at dispnew.c:6111
> #7  0x00000000004fe415 in read_char (commandflag=commandflag@entry=1, map=map@entry=0x3184a63, prev_event=<optimized out>, used_mouse_menu=used_mouse_menu@entry=0x7ffc16ee0b5b, end_time=end_time@entry=0x0) at keyboard.c:2742
> #8  0x0000000000500841 in read_key_sequence (keybuf=keybuf@entry=0x7ffc16ee0c50, prompt=prompt@entry=0x0, dont_downcase_last=dont_downcase_last@entry=false, can_return_switch_frame=can_return_switch_frame@entry=true, fix_current_buffer=fix_current_buffer@entry=true, prevent_redisplay=prevent_redisplay@entry=false) at keyboard.c:9546
> #9  0x0000000000502040 in command_loop_1 () at keyboard.c:1354
> #10 0x000000000056a40e in internal_condition_case (bfun=bfun@entry=0x501e30 <command_loop_1>, handlers=handlers@entry=0x90, hfun=hfun@entry=0x4f8da0 <cmd_error>) at eval.c:1359

This says Emacs is simply waiting for input.

Are you saying Emacs doesn't respond to keyboard input in this state?






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:49                               ` Arthur Miller
  2020-11-23 20:04                                 ` Eli Zaretskii
@ 2020-11-23 20:31                                 ` Jean Louis
  2020-11-23 21:22                                   ` Arthur Miller
  1 sibling, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-23 20:31 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

* Arthur Miller <arthur.miller@live.com> [2020-11-23 23:22]:
>> The only thing that changed regularly was of course system updates:
>> kernel, gcc & co, etc. So maybe, as mentioned earlier in this thread by
>> you or somebody else, glibc changed, and that triggers something in
>> Emacs based on how Emacs uses it. I don't know, I am not an expert in
>> this. Isn't Valgrind good for this kind of problem? Can I run Emacs as
>> a systemd service under Valgrind?

I did not change anything like glibc or the kernel in Hyperbola
GNU/Linux-libre.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:31                                 ` Eli Zaretskii
@ 2020-11-23 20:41                                   ` Jean Louis
  2020-11-23 20:53                                     ` Andreas Schwab
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-23 20:41 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

* Eli Zaretskii <eliz@gnu.org> [2020-11-23 23:32]:
> > Date: Mon, 23 Nov 2020 23:18:13 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: Eli Zaretskii <eliz@gnu.org>, fweimer@redhat.com,
> >   43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de,
> >   trevor@trevorbentley.com
> > 
> > Thread 1 (Thread 0x7f6b049e9100 (LWP 25584)):
> > #0  0x00007f6afd4765dc in pselect () at /lib/libc.so.6
> > #1  0x00000000005cf500 in really_call_select (arg=0x7ffc16edfa80) at thread.c:592
> > #2  0x00000000005d006e in flush_stack_call_func (arg=0x7ffc16edfa80, func=0x5cf4b0 <really_call_select>) at lisp.h:3791
> > #3  0x00000000005d006e in thread_select (func=<optimized out>, max_fds=max_fds@entry=19, rfds=rfds@entry=0x7ffc16edfb60, wfds=wfds@entry=0x7ffc16edfbe0, efds=efds@entry=0x0, timeout=timeout@entry=0x7ffc16ee0170, sigmask=0x0) at thread.c:624
> > #4  0x00000000005eb023 in xg_select (fds_lim=19, rfds=rfds@entry=0x7ffc16ee02a0, wfds=0x7ffc16ee0320, efds=<optimized out>, timeout=<optimized out>, sigmask=<optimized out>) at xgselect.c:131
> > #5  0x00000000005aeab4 in wait_reading_process_output (time_limit=time_limit@entry=30, nsecs=nsecs@entry=0, read_kbd=-1, do_display=do_display@entry=true, wait_for_cell=wait_for_cell@entry=0x0, wait_proc=wait_proc@entry=0x0, just_wait_proc=0) at process.c:5604
> > #6  0x00000000004253f8 in sit_for (timeout=timeout@entry=0x7a, reading=reading@entry=true, display_option=display_option@entry=1) at dispnew.c:6111
> > #7  0x00000000004fe415 in read_char (commandflag=commandflag@entry=1, map=map@entry=0x3184a63, prev_event=<optimized out>, used_mouse_menu=used_mouse_menu@entry=0x7ffc16ee0b5b, end_time=end_time@entry=0x0) at keyboard.c:2742
> > #8  0x0000000000500841 in read_key_sequence (keybuf=keybuf@entry=0x7ffc16ee0c50, prompt=prompt@entry=0x0, dont_downcase_last=dont_downcase_last@entry=false, can_return_switch_frame=can_return_switch_frame@entry=true, fix_current_buffer=fix_current_buffer@entry=true, prevent_redisplay=prevent_redisplay@entry=false) at keyboard.c:9546
> > #9  0x0000000000502040 in command_loop_1 () at keyboard.c:1354
> > #10 0x000000000056a40e in internal_condition_case (bfun=bfun@entry=0x501e30 <command_loop_1>, handlers=handlers@entry=0x90, hfun=hfun@entry=0x4f8da0 <cmd_error>) at eval.c:1359
> 
> This says Emacs is simply waiting for input.
> 
> Are you saying Emacs doesn't respond to keyboard input in this state?

Yes.  Though once I could kill it directly with C-x c without any
questions or anything.

It happens during the eww call, not immediately but while it runs.  I
could press C-g 3 times and get the error, and after that nothing: I
could not kill the buffer, could not quit, nothing worked but xkill.

In the last 3 attempts I could interrupt it and get keyboard control
back; I can see half the page loaded.  And I can kill the buffer.

I was thinking it might be ivy, but I turned it off; it is not ivy.

So if I just interrupt it during loading, I have no keyboard control,
but if I keep interrupting with C-g, then half the page appears and I
get keyboard control back.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:41                                   ` Jean Louis
@ 2020-11-23 20:53                                     ` Andreas Schwab
  2020-11-23 21:09                                       ` Jean Louis
  2020-11-24  3:25                                       ` Eli Zaretskii
  0 siblings, 2 replies; 166+ messages in thread
From: Andreas Schwab @ 2020-11-23 20:53 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

On Nov 23 2020, Jean Louis wrote:

> It happens during the eww call, not immediately but while it runs.

That probably just means it is busy in libxml parsing the page.

Andreas.

-- 
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 19:59                             ` Arthur Miller
  2020-11-23 20:15                               ` Eli Zaretskii
@ 2020-11-23 20:53                               ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  1 sibling, 0 replies; 166+ messages in thread
From: Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors @ 2020-11-23 20:53 UTC (permalink / raw)
  To: Arthur Miller
  Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos,
	Eli Zaretskii

Arthur Miller <arthur.miller@live.com> writes:

> Andrea Corallo <akrl@sdf.org> writes:
>
>> I think it would be nice to have a script that monitors Emacs's memory
>> footprint and attaches gdb to it when the memory usage goes over a
>> certain (high) threshold.
>>
>> That way it should be easy to see what we are doing, because at that
>> point we are supposed to be allocating extremely often.
>>
>>   Andrea
> Indeed.

*not* very much tested:

<https://gitlab.com/koral/mem-watchdog.el/-/blob/master/mem-watchdog.el>

You can run an Emacs -Q and use this to monitor the Emacs you are
working in (hopefully the first one does not crash too).  Note you have
to configure the OS to allow gdb to attach to other processes, or run
the monitoring Emacs as root.
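
(On kernels with the Yama LSM enabled that means something like

  sudo sysctl kernel.yama.ptrace_scope=0

to allow attaching to non-child processes.)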

Hope it helps.

  Andrea






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:53                                     ` Andreas Schwab
@ 2020-11-23 21:09                                       ` Jean Louis
  2020-11-24  3:25                                       ` Eli Zaretskii
  1 sibling, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-23 21:09 UTC (permalink / raw)
  To: Andreas Schwab; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

* Andreas Schwab <schwab@linux-m68k.org> [2020-11-23 23:53]:
> On Nov 23 2020, Jean Louis wrote:
> 
> > It happens during the eww call, not immediately but while it runs.
> 
> That probably just means it is busy in libxml parsing the page.

The instance without LD_PRELOAD is fast.  The instance with LD_PRELOAD
will show me the page but not allow any keyboard input unless I
interrupt it a few times.  And there is no CPU activity going on that I
can see on the indicator.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:04                                 ` Eli Zaretskii
@ 2020-11-23 21:12                                   ` Arthur Miller
  2020-11-24  2:07                                   ` Arthur Miller
  1 sibling, 0 replies; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 21:12 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

[-- Attachment #1: Type: text/plain, Size: 1967 bytes --]

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>>   carlos@redhat.com
>> Date: Mon, 23 Nov 2020 20:49:48 +0100
>> 
>> Isn't Valgrind good for this kind of problem? Can I run Emacs as a
>> systemd service under Valgrind?
>
> You can run Emacs under Valgrind, see etc/DEBUG for the details.  But
> I'm not sure it will work as a systemd service.
>
> Valgrind is only the right tool if we think there's a memory leak in
> Emacs itself.
OK, I'll take a look at the debug docs.  That's fine; once I get a test
I can run it as a normal process.

Anyway, I have tested heaptrack; it built in a few seconds, nothing
special there.

I am not sure about the tool; I think it misinterprets memory taken by
the Lisp environment as leaked memory.  It reports heap-loads of leaks
:-), so it must be that it just misunderstands Emacs.  I am not sure; I
am attaching a few screenshots, but I don't believe there can be as many
leaks as it reports.  It is just the Emacs one gets from emacs -Q.  I
will attach the generated data too.

I had some problems with it too.  I tried to attach it to a running
daemon process (started by systemd) and it failed until I ran it as a
sudo user.  As soon as it attached, it seems both the server and
emacsclient got completely unresponsive and stayed that way.  I killed
the client process, but the window stayed alive; I had to kill it with
xkill.  After I restarted the server, Emacs didn't read the init file,
because paths got messed up, so I had to sort that out too.  Also, the
tool produced an empty report (it didn't work).  But running it on a
standalone Emacs process as a sudo user worked.

Anyway, despite the problems it seems to be a very nice graphical tool
for seeing the call stack and what Emacs looks like internally; but I am
not sure whether it works at all for finding leaks in Emacs.


[-- Attachment #2: em-heaptrack1.png --]
[-- Type: image/png, Size: 214211 bytes --]

[-- Attachment #3: em-heaptrack2.png --]
[-- Type: image/png, Size: 50559 bytes --]

[-- Attachment #4: em-heaptrack3.png --]
[-- Type: image/png, Size: 302407 bytes --]

[-- Attachment #5: em-heaptrack4.png --]
[-- Type: image/png, Size: 67520 bytes --]

[-- Attachment #6: em-heaptrack5.png --]
[-- Type: image/png, Size: 310874 bytes --]

[-- Attachment #7: heaptrack.emacs.52042.zst --]
[-- Type: application/zstd, Size: 290761 bytes --]


* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:15                               ` Eli Zaretskii
@ 2020-11-23 21:15                                 ` Arthur Miller
  0 siblings, 0 replies; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 21:15 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos, akrl

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: Eli Zaretskii <eliz@gnu.org>,  fweimer@redhat.com,
>>   43389@debbugs.gnu.org,  bugs@gnu.support,  dj@redhat.com,
>>   michael_heerdegen@web.de,  trevor@trevorbentley.com,  carlos@redhat.com
>> Date: Mon, 23 Nov 2020 20:59:21 +0100
>> 
>> How hard/feasible would it be to use this tool with Emacs:
>> 
>> https://gperftools.github.io/gperftools/heapprofile.html
>
> AFAIU, this cannot be used with glibc's malloc, it needs libtcmalloc
> instead.
Oh yes, I understand.  But is there not a chance it would help to run
Emacs on tcmalloc instead of the standard malloc, if there by chance is
a leak somewhere in Emacs? ... God forbid, of course :-)






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:31                                 ` Jean Louis
@ 2020-11-23 21:22                                   ` Arthur Miller
  2020-11-24  5:29                                     ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-23 21:22 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

Jean Louis <bugs@gnu.support> writes:

> * Arthur Miller <arthur.miller@live.com> [2020-11-23 23:22]:
>> The only thing that changed regularly was of course system updates:
>> kernel, gcc & co, etc. So maybe, as mentioned earlier in this thread by
>> you or somebody else, glibc changed, and that triggers something in
>> Emacs based on how Emacs uses it. I don't know, I am not an expert in
>> this. Isn't Valgrind good for this kind of problem? Can I run Emacs as
>> a systemd service under Valgrind?
>
> I did not change anything like glibc or the kernel in Hyperbola
> GNU/Linux-libre.
Didn't you update your system since last summer?






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 15:46                 ` Eli Zaretskii
  2020-11-23 17:29                   ` Arthur Miller
  2020-11-23 18:33                   ` Jean Louis
@ 2020-11-23 21:30                   ` Trevor Bentley
  2020-11-23 22:11                     ` Trevor Bentley
  2020-11-24 16:07                     ` Eli Zaretskii
  2 siblings, 2 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-11-23 21:30 UTC (permalink / raw)
  To: Eli Zaretskii, Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, carlos


Ah geez, there's a dozen threads now.  I'll just start from here.

I haven't set up the memory trace lib yet, but I've been running an
instance of Emacs and printing as much as I can about its memory
usage, including (malloc-info).  I reduced MALLOC_ARENA_MAX to 2.
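
(That is, the instance was started roughly as

  MALLOC_ARENA_MAX=2 emacs

so that glibc caps the number of malloc arenas it creates.)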

This instance sat around at ~300MB for a day, then spiked to 
1000MB.  I ran a bunch of memory-related functions, and it stopped 
growing.  I believe (garbage-collect) halted the growth.

It ran for another 3 days at ~1100MB until another sudden spike up 
to 2300MB.

As usual, this is a graphical instance running emacs-slack with
tons of network traffic, images, and such.

In the meantime, while that instance was running, a second
graphical instance suddenly spiked to 4100MB.  That instance
is interesting, as it's not doing anything special at all.  It has
a few elisp files open, and reports only 700KB of buffers and
42.2MB in elisp data.

A third graphical instance has been idling during this time.  I've
never done a single thing with it beyond starting it.  That one is
still at 83MB.

Below is a large memory report from the emacs-slack instance:

----------------
BEGIN LOG
----------------
;; --------------------------------------
;; one day of runtime
;; growing 1MB every few seconds; RSS 1100MB
;; --------------------------------------

(getenv "MALLOC_ARENA_MAX")
"2"

;; buffers ~= 60MB
(let ((size 0))
  (dolist (buffer (buffer-list) size)
    (setq size (+ size (buffer-size buffer)))))
60300462

;; sums to ~100MB if I'm reading it right?
(garbage-collect)
((conses 16 1143686 1675416) (symbols 48 32466 160)
 (strings 32 241966 542675) (string-bytes 1 5872840)
 (vectors 16 116994) (vector-slots 8 8396419 357942)
 (floats 8 1705 7024) (intervals 56 27139 10678) (buffers 992 53))

;; /proc/$PID/smaps [heap] entry
56395d707000-56399b330000 rw-p 00000000 00:00 0                          [heap]
Size:            1011876 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:             1010948 kB
Pss:             1010948 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:   1010948 kB
Referenced:      1007016 kB
Anonymous:       1010948 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:            0
ProtectionKey:         0

;; malloc-info
<malloc version="1">
<heap nr="0">
<sizes>
  <size from="17" to="32" total="64" count="2"/> <size from="33" 
  to="48" total="192" count="4"/> <size from="33" to="33" 
  total="56826" count="1722"/> <size from="49" to="49" 
  total="16121" count="329"/> <size from="65" to="65" 
  total="567970" count="8738"/> <size from="81" to="81" 
  total="38070" count="470"/> <size from="97" to="97" 
  total="80122" count="826"/> <size from="113" to="113" 
  total="37629" count="333"/> <size from="129" to="129" 
  total="435117" count="3373"/> <size from="145" to="145" 
  total="44805" count="309"/> <size from="161" to="161" 
  total="111090" count="690"/> <size from="177" to="177" 
  total="35577" count="201"/> <size from="193" to="193" 
  total="293553" count="1521"/> <size from="209" to="209" 
  total="33858" count="162"/> <size from="225" to="225" 
  total="66600" count="296"/> <size from="241" to="241" 
  total="35909" count="149"/> <size from="257" to="257" 
  total="179900" count="700"/> <size from="273" to="273" 
  total="28938" count="106"/> <size from="289" to="289" 
  total="48841" count="169"/> <size from="305" to="305" 
  total="21655" count="71"/> <size from="321" to="321" 
  total="127758" count="398"/> <size from="337" to="337" 
  total="20220" count="60"/> <size from="353" to="353" 
  total="37065" count="105"/> <size from="369" to="369" 
  total="28044" count="76"/> <size from="385" to="385" 
  total="90860" count="236"/> <size from="401" to="401" 
  total="21253" count="53"/> <size from="417" to="417" 
  total="51291" count="123"/> <size from="433" to="433" 
  total="21217" count="49"/> <size from="449" to="449" 
  total="77228" count="172"/> <size from="465" to="465" 
  total="19995" count="43"/> <size from="481" to="481" 
  total="32227" count="67"/> <size from="497" to="497" 
  total="19383" count="39"/> <size from="513" to="513" 
  total="63099" count="123"/> <size from="529" to="529" 
  total="14283" count="27"/> <size from="545" to="545" 
  total="31065" count="57"/> <size from="561" to="561" 
  total="23001" count="41"/> <size from="577" to="577" 
  total="50199" count="87"/> <size from="593" to="593" 
  total="18383" count="31"/> <size from="609" to="609" 
  total="38367" count="63"/> <size from="625" to="625" 
  total="21875" count="35"/> <size from="641" to="641" 
  total="39101" count="61"/> <size from="657" to="657" 
  total="28251" count="43"/> <size from="673" to="673" 
  total="30958" count="46"/> <size from="689" to="689" 
  total="19292" count="28"/> <size from="705" to="705" 
  total="38070" count="54"/> <size from="721" to="721" 
  total="12978" count="18"/> <size from="737" to="737" 
  total="33902" count="46"/> <size from="753" to="753" 
  total="20331" count="27"/> <size from="769" to="769" 
  total="33067" count="43"/> <size from="785" to="785" 
  total="18840" count="24"/> <size from="801" to="801" 
  total="29637" count="37"/> <size from="817" to="817" 
  total="17157" count="21"/> <size from="833" to="833" 
  total="35819" count="43"/> <size from="849" to="849" 
  total="16131" count="19"/> <size from="865" to="865" 
  total="21625" count="25"/> <size from="881" to="881" 
  total="14977" count="17"/> <size from="897" to="897" 
  total="31395" count="35"/> <size from="913" to="913" 
  total="18260" count="20"/> <size from="929" to="929" 
  total="37160" count="40"/> <size from="945" to="945" 
  total="28350" count="30"/> <size from="961" to="961" 
  total="40362" count="42"/> <size from="977" to="977" 
  total="30287" count="31"/> <size from="993" to="993" 
  total="43692" count="44"/> <size from="1009" to="1009" 
  total="1426726" count="1414"/> <size from="1025" to="1073" 
  total="1167589" count="1093"/> <size from="1089" to="1137" 
  total="1370809" count="1209"/> <size from="1153" to="1201" 
  total="723005" count="605"/> <size from="1217" to="1265" 
  total="467988" count="372"/> <size from="1281" to="1329" 
  total="258180" count="196"/> <size from="1345" to="1393" 
  total="128221" count="93"/> <size from="1409" to="1457" 
  total="143844" count="100"/> <size from="1473" to="1521" 
  total="129078" count="86"/> <size from="1537" to="1585" 
  total="93980" count="60"/> <size from="1601" to="1649" 
  total="108995" count="67"/> <size from="1665" to="1713" 
  total="98218" count="58"/> <size from="1729" to="1777" 
  total="121253" count="69"/> <size from="1793" to="1841" 
  total="110877" count="61"/> <size from="1857" to="1905" 
  total="92257" count="49"/> <size from="1921" to="1969" 
  total="83691" count="43"/> <size from="1985" to="2033" 
  total="235973" count="117"/> <size from="2049" to="2097" 
  total="213783" count="103"/> <size from="2113" to="2161" 
  total="653793" count="305"/> <size from="2177" to="2225" 
  total="682581" count="309"/> <size from="2241" to="2289" 
  total="260931" count="115"/> <size from="2305" to="2337" 
  total="109375" count="47"/> <size from="2369" to="2417" 
  total="88789" count="37"/> <size from="2433" to="2481" 
  total="83378" count="34"/> <size from="2497" to="2545" 
  total="98263" count="39"/> <size from="2561" to="2609" 
  total="77438" count="30"/> <size from="2657" to="2673" 
  total="42656" count="16"/> <size from="2689" to="2737" 
  total="48754" count="18"/> <size from="2753" to="2801" 
  total="63879" count="23"/> <size from="2817" to="2865" 
  total="62422" count="22"/> <size from="2881" to="2929" 
  total="57988" count="20"/> <size from="2945" to="2993" 
  total="68247" count="23"/> <size from="3009" to="3057" 
  total="133164" count="44"/> <size from="3073" to="3121" 
  total="397169" count="129"/> <size from="3137" to="3569" 
  total="2008020" count="612"/> <size from="3585" to="4081" 
  total="666716" count="172"/> <size from="4097" to="4593" 
  total="7549855" count="1775"/> <size from="4609" to="5105" 
  total="2643468" count="540"/> <size from="5121" to="5617" 
  total="5882607" count="1103"/> <size from="5633" to="6129" 
  total="2430783" count="415"/> <size from="6145" to="6641" 
  total="3494147" count="547"/> <size from="6657" to="7153" 
  total="2881062" count="422"/> <size from="7169" to="7665" 
  total="5880630" count="790"/> <size from="7681" to="8177" 
  total="2412798" count="302"/> <size from="8193" to="8689" 
  total="11000664" count="1320"/> <size from="8705" to="9201" 
  total="4458714" count="490"/> <size from="9217" to="9713" 
  total="4959696" count="528"/> <size from="9729" to="10225" 
  total="6223631" count="623"/> <size from="10241" to="10737" 
  total="3347537" count="321"/> <size from="10753" to="12273" 
  total="7665386" count="666"/> <size from="12289" to="16369" 
  total="37137026" count="2658"/> <size from="16385" to="20465" 
  total="26637896" count="1496"/> <size from="20481" to="24561" 
  total="17043773" count="765"/> <size from="24593" to="28657" 
  total="15934986" count="602"/> <size from="28673" to="32753" 
  total="21737575" count="711"/> <size from="32769" to="36849" 
  total="17276544" count="496"/> <size from="36865" to="40945" 
  total="14702299" count="379"/> <size from="40961" to="65521" 
  total="53337460" count="1044"/> <size from="65585" to="98289" 
  total="51364750" count="654"/> <size from="98369" to="131057" 
  total="27361507" count="243"/> <size from="131121" to="163665" 
  total="27275915" count="187"/> <size from="163841" to="262129" 
  total="63020958" count="302"/> <size from="262145" to="519809" 
  total="126431823" count="351"/> <size from="525073" to="4639665" 
  total="148733598" count="174"/> <unsorted from="18465" 
  to="18465" total="18465" count="1"/> 
</sizes>
<total type="fast" count="6" size="256"/>
<total type="rest" count="50540" size="735045803"/>
<system type="current" size="1036161024"/>
<system type="max" size="1036161024"/>
<aspace type="total" size="1036161024"/>
<aspace type="mprotect" size="1036161024"/>
</heap>
<heap nr="1">
<sizes>
  <size from="33" to="33" total="231" count="7"/>
  <size from="49" to="49" total="245" count="5"/>
  <size from="65" to="65" total="260" count="4"/>
  <size from="81" to="81" total="243" count="3"/>
  <size from="97" to="97" total="97" count="1"/>
  <size from="113" to="113" total="113" count="1"/>
  <size from="129" to="129" total="516" count="4"/>
  <size from="161" to="161" total="644" count="4"/>
  <size from="209" to="209" total="418" count="2"/>
  <size from="241" to="241" total="241" count="1"/>
  <size from="257" to="257" total="257" count="1"/>
  <size from="305" to="305" total="610" count="2"/>
  <size from="705" to="705" total="705" count="1"/>
  <size from="1294673" to="3981489" total="7995027" count="3"/>
  <unsorted from="30561" to="4013649" total="4044210" count="2"/>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="42" size="20184569"/>
<system type="current" size="20250624"/>
<system type="max" size="20250624"/>
<aspace type="total" size="20250624"/>
<aspace type="mprotect" size="20250624"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="6" size="256"/>
<total type="rest" count="50582" size="755230372"/>
<total type="mmap" count="4" size="44789760"/>
<system type="current" size="1056411648"/>
<system type="max" size="1056411648"/>
<aspace type="total" size="1056411648"/>
<aspace type="mprotect" size="1056411648"/>
</malloc>

;; --------------------------------------
;; ~3 hours later
;; growth slowed after the previous (garbage-collect)
;; RSS 1140MB
;; --------------------------------------

(memory-limit) ;; virtual memory, not RSS
1429620

(message "%f" gc-cons-threshold)
"800000.000000"

(message "%f" gc-cons-percentage)
"0.100000"

(emacs-uptime)
"1 day, 4 hours, 50 minutes, 30 seconds"

(message "%f" gcs-done)
"708.000000"

(message "%f" gc-elapsed)
"201.724018"

(message "%s" memory-full)
"nil"

(memory-use-counts)
(224118465 575286 217714299 65607 946347937 563190 26430775)

(memory-usage)
((conses 16 1199504 2511807) (symbols 48 32742 159)
 (strings 32 246671 575263) (string-bytes 1 5992063)
 (vectors 16 118364) (vector-slots 8 8412872 474129)
 (floats 8 1771 10028) (intervals 56 29873 12035) (buffers 992 60))
 
 =>	18.3MB (+ 38.3MB dead) in conses
	1.50MB (+ 7.45kB dead) in symbols
	7.53MB (+ 17.6MB dead) in strings
	5.71MB in string-bytes
	1.81MB in vectors
	64.2MB (+ 3.62MB dead) in vector-slots
	13.8kB (+ 78.3kB dead) in floats
	1.60MB (+  658kB dead) in intervals
	58.1kB in buffers

 Total in lisp objects: 161MB (live 101MB, dead 60.2MB)

 Buffer ralloc memory usage:
 60 buffers, 64.4MB total (956kB in gaps)
      Size	Gap	Name 
 
  47795241	745530	*censored*
   4681196	29261	*censored*
   4543324	25017	*censored*
   4478601	28398	*censored*
    862373	622	*censored*
    859981	4898	*censored*
    859617	3696	*censored*
    859355	4131	*censored*
    859131	4009	*censored*
    471538	6609	*censored*
     60099	6451	*censored*
     20589	1312	*censored*
     19452	2129	*censored*
     17776	1746	*censored*
     16877	217	*censored*
     16484	1447	*censored*
     13488	56	*censored*
     13212	1810	*censored*
     12747	2081	*censored*
     12640	2098	*censored*
     12478	900	*censored*
     12130	453	*censored*
     10745	10186	*censored*
     10703	2082	*censored*
      9965	474	*censored*
      9828	1075	*censored*
      8000	226	*censored*
      5117	1396	*censored*
      4282	1891	*censored*
      2546	1544	*censored*
      1630	675	*censored*
      1479	591	*censored*
      1228	918	*censored*
       883	1280	*censored*
       679	1574	*censored*
       678	5483	*censored*
       513	27194	*censored*
       299	1731	*censored*
       232	3839	*censored*
       131	1985	*censored*
        97	1935	*censored*
        92	1979	*censored*
        72	1999	*censored*
        69	1999	*censored*
        69	4009	*censored*
        67	1999	*censored*
        64	1985	*censored*
        62	6034	*censored*
        62	1999	*censored*
        61	1960	*censored*
        28	4030	*censored*
        27	1999	*censored*
         0	2026	*censored*
         0	20	*censored*
         0	2065	*censored*
         0	2072	*censored*
         0	20	*censored*
         0	20	*censored*
         0	2059	*censored*
         0	2037	*censored*
 
 
 
;; --------------------------------------
;; 3 days later
;; RSS was steady at 1150MB
;; leaped to 2.3GB very suddenly
;; RSS 2311M
;; --------------------------------------

;; ~182MB
(let ((size 0))
  (dolist (buffer (buffer-list) size)
    (setq size (+ size (buffer-size buffer)))))
182903045

;; sums to ~142MB if I'm reading it right?
(garbage-collect)
((conses 16 2081486 2630206) (symbols 48 61019 79)
 (strings 32 353371 288980) (string-bytes 1 13294206)
 (vectors 16 144742) (vector-slots 8 9503757 592939)
 (floats 8 2373 8320) (intervals 56 46660 10912) (buffers 992 82))

(reduce '+ (cl-loop for thing in (garbage-collect)
                    collect (* (nth 1 thing) (nth 2 thing))))
142115406

;; /proc/$PID/smaps [heap] entry
56395d707000-5639e0d43000 rw-p 00000000 00:00 0                          [heap]
Size:            2152688 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:             2152036 kB
Pss:             2152036 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:   2152036 kB
Referenced:      2146588 kB
Anonymous:       2152036 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:            0
ProtectionKey:         0

;; malloc-info
(malloc-info)
<malloc version="1">
<heap nr="0">
<sizes>
  <size from="33" to="48" total="240" count="5"/> <size from="113" 
  to="128" total="128" count="1"/> <size from="129" to="129" 
  total="26961" count="209"/> <size from="145" to="145" 
  total="112230" count="774"/> <size from="161" to="161" 
  total="4830" count="30"/> <size from="177" to="177" 
  total="66375" count="375"/> <size from="193" to="193" 
  total="159804" count="828"/> <size from="209" to="209" 
  total="6897" count="33"/> <size from="225" to="225" 
  total="82800" count="368"/> <size from="241" to="241" 
  total="48923" count="203"/> <size from="257" to="257" 
  total="119505" count="465"/> <size from="273" to="273" 
  total="47775" count="175"/> <size from="289" to="289" 
  total="73984" count="256"/> <size from="305" to="305" 
  total="33855" count="111"/> <size from="321" to="321" 
  total="147660" count="460"/> <size from="337" to="337" 
  total="33700" count="100"/> <size from="353" to="353" 
  total="73424" count="208"/> <size from="369" to="369" 
  total="5166" count="14"/> <size from="385" to="385" 
  total="94325" count="245"/> <size from="401" to="401" 
  total="44511" count="111"/> <size from="417" to="417" 
  total="67971" count="163"/> <size from="433" to="433" 
  total="31176" count="72"/> <size from="449" to="449" 
  total="88004" count="196"/> <size from="465" to="465" 
  total="33480" count="72"/> <size from="481" to="481" 
  total="86580" count="180"/> <size from="497" to="497" 
  total="36778" count="74"/> <size from="513" to="513" 
  total="108243" count="211"/> <size from="529" to="529" 
  total="15341" count="29"/> <size from="545" to="545" 
  total="64310" count="118"/> <size from="561" to="561" 
  total="28050" count="50"/> <size from="577" to="577" 
  total="76741" count="133"/> <size from="593" to="593" 
  total="40917" count="69"/> <size from="609" to="609" 
  total="77343" count="127"/> <size from="625" to="625" 
  total="30000" count="48"/> <size from="641" to="641" 
  total="164737" count="257"/> <size from="657" to="657" 
  total="35478" count="54"/> <size from="673" to="673" 
  total="44418" count="66"/> <size from="689" to="689" 
  total="4134" count="6"/> <size from="705" to="705" total="86010" 
  count="122"/> <size from="721" to="721" total="35329" 
  count="49"/> <size from="737" to="737" total="63382" 
  count="86"/> <size from="753" to="753" total="45933" 
  count="61"/> <size from="769" to="769" total="85359" 
  count="111"/> <size from="785" to="785" total="51810" 
  count="66"/> <size from="801" to="801" total="191439" 
  count="239"/> <size from="817" to="817" total="42484" 
  count="52"/> <size from="833" to="833" total="7497" count="9"/> 
  <size from="849" to="849" total="5094" count="6"/> <size 
  from="865" to="865" total="4325" count="5"/> <size from="881" 
  to="881" total="5286" count="6"/> <size from="897" to="897" 
  total="6279" count="7"/> <size from="913" to="913" total="6391" 
  count="7"/> <size from="929" to="929" total="4645" count="5"/> 
  <size from="945" to="945" total="3780" count="4"/> <size 
  from="961" to="961" total="1922" count="2"/> <size from="977" 
  to="977" total="9770" count="10"/> <size from="1009" to="1009" 
  total="122089" count="121"/> <size from="1025" to="1073" 
  total="156226" count="146"/> <size from="1089" to="1137" 
  total="148084" count="132"/> <size from="1153" to="1201" 
  total="75664" count="64"/> <size from="1217" to="1265" 
  total="83731" count="67"/> <size from="1281" to="1329" 
  total="101437" count="77"/> <size from="1345" to="1393" 
  total="107822" count="78"/> <size from="1409" to="1457" 
  total="91680" count="64"/> <size from="1473" to="1521" 
  total="51074" count="34"/> <size from="1537" to="1585" 
  total="65482" count="42"/> <size from="1601" to="1649" 
  total="32484" count="20"/> <size from="1665" to="1713" 
  total="50638" count="30"/> <size from="1729" to="1777" 
  total="33283" count="19"/> <size from="1793" to="1825" 
  total="18106" count="10"/> <size from="1857" to="1905" 
  total="35683" count="19"/> <size from="1921" to="1969" 
  total="117132" count="60"/> <size from="1985" to="2033" 
  total="46295" count="23"/> <size from="2049" to="2097" 
  total="257804" count="124"/> <size from="2113" to="2161" 
  total="92075" count="43"/> <size from="2177" to="2225" 
  total="39666" count="18"/> <size from="2241" to="2289" 
  total="81972" count="36"/> <size from="2305" to="2353" 
  total="337953" count="145"/> <size from="2369" to="2417" 
  total="399879" count="167"/> <size from="2433" to="2481" 
  total="555635" count="227"/> <size from="2497" to="2545" 
  total="372660" count="148"/> <size from="2561" to="2609" 
  total="431415" count="167"/> <size from="2625" to="2673" 
  total="325771" count="123"/> <size from="2689" to="2737" 
  total="412584" count="152"/> <size from="2753" to="2801" 
  total="335673" count="121"/> <size from="2817" to="2865" 
  total="235587" count="83"/> <size from="2881" to="2929" 
  total="283890" count="98"/> <size from="2945" to="2993" 
  total="335073" count="113"/> <size from="3009" to="3057" 
  total="278876" count="92"/> <size from="3073" to="3121" 
  total="358180" count="116"/> <size from="3137" to="3569" 
  total="2372709" count="709"/> <size from="3585" to="4081" 
  total="1847856" count="480"/> <size from="4097" to="4593" 
  total="5672856" count="1320"/> <size from="4609" to="5105" 
  total="4675836" count="956"/> <size from="5121" to="5617" 
  total="6883318" count="1286"/> <size from="5633" to="6129" 
  total="6011919" count="1023"/> <size from="6145" to="6641" 
  total="6239871" count="975"/> <size from="6657" to="7153" 
  total="6540165" count="949"/> <size from="7169" to="7665" 
  total="5515848" count="744"/> <size from="7681" to="8177" 
  total="5148216" count="648"/> <size from="8193" to="8689" 
  total="8190223" count="975"/> <size from="8705" to="9201" 
  total="5854315" count="651"/> <size from="9217" to="9713" 
  total="5312354" count="562"/> <size from="9729" to="10225" 
  total="5154212" count="516"/> <size from="10241" to="10737" 
  total="4074005" count="389"/> <size from="10753" to="12273" 
  total="11387550" count="990"/> <size from="12289" to="16369" 
  total="32661229" count="2317"/> <size from="16385" to="20465" 
  total="36652437" count="2037"/> <size from="20481" to="24561" 
  total="21272131" count="947"/> <size from="24577" to="28657" 
  total="25462302" count="958"/> <size from="28673" to="32753" 
  total="28087234" count="914"/> <size from="32769" to="36849" 
  total="39080113" count="1121"/> <size from="36865" to="40945" 
  total="30141527" count="775"/> <size from="40961" to="65521" 
  total="166092799" count="3119"/> <size from="65537" to="98289" 
  total="218425380" count="2692"/> <size from="98321" to="131057" 
  total="178383171" count="1555"/> <size from="131089" to="163825" 
  total="167800886" count="1142"/> <size from="163841" to="262065" 
  total="367649915" count="1819"/> <size from="262161" to="522673" 
  total="185347984" count="560"/> <size from="525729" 
  to="30878897" total="113322865" count="97"/> <unsorted from="33" 
  to="33" total="33" count="1"/> 
</sizes>
<total type="fast" count="6" size="368"/>
<total type="rest" count="43944" size="1713595767"/>
<system type="current" size="2204352512"/>
<system type="max" size="2204352512"/>
<aspace type="total" size="2204352512"/>
<aspace type="mprotect" size="2204352512"/>
</heap>
<heap nr="1">
<sizes>
  <size from="17" to="32" total="160" count="5"/> <size from="33" 
  to="48" total="336" count="7"/> <size from="49" to="64" 
  total="448" count="7"/> <size from="65" to="80" total="560" 
  count="7"/> <size from="97" to="112" total="784" count="7"/> 
  <size from="33" to="33" total="231" count="7"/> <size from="49" 
  to="49" total="245" count="5"/> <size from="65" to="65" 
  total="390" count="6"/> <size from="81" to="81" total="162" 
  count="2"/> <size from="97" to="97" total="97" count="1"/> <size 
  from="113" to="113" total="113" count="1"/> <size from="129" 
  to="129" total="516" count="4"/> <size from="161" to="161" 
  total="644" count="4"/> <size from="209" to="209" total="2299" 
  count="11"/> <size from="241" to="241" total="241" count="1"/> 
  <size from="257" to="257" total="257" count="1"/> <size 
  from="305" to="305" total="610" count="2"/> <size from="32209" 
  to="32209" total="64418" count="2"/> <size from="1294673" 
  to="4053073" total="27998472" count="8"/> <unsorted from="209" 
  to="4053073" total="4080781" count="13"/> 
</sizes>
<total type="fast" count="33" size="2288"/>
<total type="rest" count="69" size="42357748"/>
<system type="current" size="42426368"/>
<system type="max" size="42426368"/>
<aspace type="total" size="42426368"/>
<aspace type="mprotect" size="42426368"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="39" size="2656"/>
<total type="rest" count="44013" size="1755953515"/>
<total type="mmap" count="6" size="121565184"/>
<system type="current" size="2246778880"/>
<system type="max" size="2246778880"/>
<aspace type="total" size="2246778880"/>
<aspace type="mprotect" size="2246778880"/>
</malloc>

(memory-limit) ;; virtual memory, not RSS
2630768

(message "%f" gc-cons-threshold)
"800000.000000"

(message "%f" gc-cons-percentage)
"0.100000"

(emacs-uptime)
"4 days, 4 hours, 5 minutes, 3 seconds"

(message "%f" gcs-done)
"2140.000000"

(message "%f" gc-elapsed)
"760.624580"

(message "%s" memory-full)
"nil"

;; I believe this is cumulative, not current?
(memory-use-counts)
(989044259 2763760 754240919 143568 2633617972 2535567 76512576)

(reduce '+ (memory-use-counts))
4509544031
 
  
(memory-usage)
((conses 16 2081326 3094498) (symbols 48 61019 79) (strings 32 353291 494869) (string-bytes 1 13286757) (vectors 16 144725) (vector-slots 8 9503378 623467) (floats 8 2373 8320) (intervals 56 46640 11652) (buffers 992 82))

 =>	31.8MB (+ 47.2MB dead) in conses
	2.79MB (+ 3.70kB dead) in symbols
	10.8MB (+ 15.1MB dead) in strings
	12.7MB in string-bytes
	2.21MB in vectors
	72.5MB (+ 4.76MB dead) in vector-slots
	18.5kB (+ 65.0kB dead) in floats
	2.49MB (+ 637kB dead) in intervals
	79.4kB in buffers

Total in lisp objects: 203MB (live 135MB, dead 67.8MB)

Buffer ralloc memory usage:
82 buffers
176MB total (2.04MB in gaps)
      Size	Gap	Name 
 
  91928037   1241610   *censored*
  27233492    123915   *censored*
  16165441    173855   *censored*
  15789683     66347   *censored*
  15688792    205051   *censored*
   3040510      1437   *censored*
   3030476     17503   *censored*
   3027663     15314   *censored*
   3027493     16032   *censored*
   3026818     15601   *censored*
    211934      5198   *censored*
     87685     23923   *censored*
     57762      2629   *censored*
     52780       677   *censored*
     35991      2269   *censored*
     25403      1824   *censored*
     18008      1514   *censored*
     16930        64   *censored*
     16877       217   *censored*
     16484      1447   *censored*
     14232     14654   *censored*
     14192       605   *censored*
     13715      1130   *censored*
     13575      1689   *censored*
     13343      1377   *censored*
     13198      1540   *censored*
     13178      1598   *censored*
     12747      2081   *censored*
     10883      1902   *censored*
     10271       632   *censored*
      6402     44449   *censored*
      5127      1386   *censored*
      5005      1156   *censored*
      4282      1891   *censored*
      3840      2313   *censored*
      3409     16717   *censored*
      3409     16717   *censored*
      2872      1186   *censored*
      2541      1511   *censored*
      2067      2011   *censored*
      1630       675   *censored*
      1626       444   *censored*
      1490       679   *censored*
      1413     26294   *censored*
      1159      4937   *censored*
       962      1063   *censored*
       678      1574   *censored*
       562      2297   *censored*
       324      2008   *censored*
       324      2008   *censored*
       151      1967   *censored*
       137      1887   *censored*
       133      1983   *censored*
        97      1935   *censored*
        78      3998   *censored*
        72      1999   *censored*
        71      3985   *censored*
        69      1999   *censored*
        67      1999   *censored*
        64      1985   *censored*
        62      1999   *censored*
        61      6035   *censored*
        49      2008   *censored*
        33      2038   *censored*
        31      4040   *censored*
        27      1999   *censored*
        25      1999   *censored*
        25      1999   *censored*
        25      1999   *censored*
        22      1999   *censored*
        20         0   *censored*
        16      2021   *censored*
        16         4   *censored*
         0      2026   *censored*
         0        20   *censored*
         0      5026   *censored*
         0      2072   *censored*
         0        20   *censored*
         0        20   *censored*
         0      2059   *censored*
         0        20   *censored*
         0        20   *censored*

----------------
END LOG
----------------

-Trevor






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 21:30                   ` Trevor Bentley
@ 2020-11-23 22:11                     ` Trevor Bentley
  2020-11-24 16:07                     ` Eli Zaretskii
  1 sibling, 0 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-11-23 22:11 UTC (permalink / raw)
  To: Eli Zaretskii, Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, carlos

Trevor Bentley <trevor@trevorbentley.com> writes:
 
> Below is a large memory report from the emacs-slack instance: 

Formatting was butchered.  Try this:

https://trevorbentley.com/emacs_malloc_info.log

-Trevor






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:04                                 ` Eli Zaretskii
  2020-11-23 21:12                                   ` Arthur Miller
@ 2020-11-24  2:07                                   ` Arthur Miller
  1 sibling, 0 replies; 166+ messages in thread
From: Arthur Miller @ 2020-11-24  2:07 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor, carlos

Eli Zaretskii <eliz@gnu.org> writes:

>> From: Arthur Miller <arthur.miller@live.com>
>> Cc: bugs@gnu.support,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>>   dj@redhat.com,  michael_heerdegen@web.de,  trevor@trevorbentley.com,
>>   carlos@redhat.com
>> Date: Mon, 23 Nov 2020 20:49:48 +0100
>> 
>> Isn't Valgrind good for this kind of problems? Can I run emacs as a
>> systemd service in Valgrind?
>
> You can run Emacs under Valgrind, see etc/DEBUG for the details.  But
> I'm not sure it will work as systemd service.
>
> Valgrind is only the right tool if we think there's a memory leak in
> Emacs itself.
Yeah, you are right.

I was trying to crash my Emacs for like 4 hours now. I tried to simulate
dired and copying/moving files around, since I experienced crashes mostly
when in dired and helm; I put a function on a timer that made 1000 files
every few seconds, read those files back into lists, copied them around
and deleted them, and watched allocations, and all I got was spent time;
Emacs was rock solid. Typical :D.
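
A minimal sketch of that kind of churn test, put together from the
description above (the directory and function names are made up):

  (defvar my/churn-dir (expand-file-name "emacs-churn" temporary-file-directory))

  (defun my/churn-files ()
    "Make 1000 small files, read them back into a list, copy and delete them."
    (make-directory my/churn-dir t)
    (let (contents)
      (dotimes (i 1000)
        (let ((f (expand-file-name (format "f%d" i) my/churn-dir)))
          ;; Write a small file, then read it back into a list of strings.
          (with-temp-file f (insert (make-string 100 ?x)))
          (push (with-temp-buffer
                  (insert-file-contents f)
                  (buffer-string))
                contents)
          (copy-file f (concat f ".copy") t)))
      ;; Delete everything we created (the copies also match "^f").
      (dolist (f (directory-files my/churn-dir t "^f"))
        (delete-file f))
      (length contents)))

  ;; Repeat every few seconds, as in the test described above.
  (run-with-timer 0 5 #'my/churn-files)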

I hope that this pmem for the process is correct; I was looking at the
attributes and saw it go up and down, but it seemed to stay in the
range ~2.5% to ~3.5%.

This looked typical; pmem was different for every run, but stayed below
3.5%:

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 683640 125000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 684725 570000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 685810 502000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 686711 538000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 687465 69000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))
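
For reference, snapshots like the ones above can be collected on a
timer with something like this (a sketch; the buffer name is
arbitrary):

  ;; Append a process-attributes snapshot for this Emacs every 10 seconds.
  (run-with-timer
   0 10
   (lambda ()
     (with-current-buffer (get-buffer-create "*mem-samples*")
       (goto-char (point-max))
       (prin1 (process-attributes (emacs-pid)) (current-buffer))
       (insert "\n"))))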

I will see if it comes back, and see if I can play with it more; I give up for now.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 20:53                                     ` Andreas Schwab
  2020-11-23 21:09                                       ` Jean Louis
@ 2020-11-24  3:25                                       ` Eli Zaretskii
  1 sibling, 0 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-24  3:25 UTC (permalink / raw)
  To: Andreas Schwab
  Cc: fweimer, 43389, bugs, dj, carlos, trevor, michael_heerdegen

> From: Andreas Schwab <schwab@linux-m68k.org>
> Cc: Eli Zaretskii <eliz@gnu.org>,  fweimer@redhat.com,
>   43389@debbugs.gnu.org,  dj@redhat.com,  carlos@redhat.com,
>   trevor@trevorbentley.com,  michael_heerdegen@web.de
> Date: Mon, 23 Nov 2020 21:53:22 +0100
> 
> On Nov 23 2020, Jean Louis wrote:
> 
> > It happens during eww call, not immediately but during.
> 
> That probably just means it is busy in libxml parsing the page.

That's not what the backtrace is showing, though.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 21:22                                   ` Arthur Miller
@ 2020-11-24  5:29                                     ` Jean Louis
  2020-11-24  8:15                                       ` Arthur Miller
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-24  5:29 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Arthur Miller <arthur.miller@live.com> [2020-11-24 00:23]:
> Jean Louis <bugs@gnu.support> writes:
> 
> > * Arthur Miller <arthur.miller@live.com> [2020-11-23 23:22]:
> >> The only thing that changed regularly was of course system updates: kernel,
> >> gcc & co etc. So it maybe is as mentioned earlier in this thread by
> >> either you or somebody else is that glibc changed and that maybe
> >> triggers something in Emacs based on how Emacs use it. I don't know I am
> >> not expert in this. Isn't Valgrind good for this kind of problems? Can I
> >> run emacs as a systemd service in Valgrind?
> >
> > I did not change anything like glibc or kernel in Hyperbola
> > GNU/Linux-libre
> Didn't you update your system since last summer?

I am pulling Emacs from git and consider system upgraded that way.

For system packages, pacman says there is nothing to do most of the
time, unless there is a new kernel or some security issue.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24  5:29                                     ` Jean Louis
@ 2020-11-24  8:15                                       ` Arthur Miller
  2020-11-24  9:06                                         ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-24  8:15 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

Jean Louis <bugs@gnu.support> writes:

> * Arthur Miller <arthur.miller@live.com> [2020-11-24 00:23]:
>> Jean Louis <bugs@gnu.support> writes:
>> 
>> > * Arthur Miller <arthur.miller@live.com> [2020-11-23 23:22]:
>> >> The only thing that changed regularly was of course system updates: kernel,
>> >> gcc & co etc. So it maybe is as mentioned earlier in this thread by
>> >> either you or somebody else is that glibc changed and that maybe
>> >> triggers something in Emacs based on how Emacs use it. I don't know I am
>> >> not expert in this. Isn't Valgrind good for this kind of problems? Can I
>> >> run emacs as a systemd service in Valgrind?
>> >
>> > I did not change anything like glibc or kernel in Hyperbola
>> > GNU/Linux-libre
>> Didn't you update your system since last summer?
>
> I am pulling Emacs from git and consider system upgraded that way.
same here

> For system packages, pacman says there is nothing to do most of the
> time, unless there is a new kernel or some security issue.

Aha, you are running the LTS kernel?

My pacman brings in updates every day.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24  8:15                                       ` Arthur Miller
@ 2020-11-24  9:06                                         ` Jean Louis
  2020-11-24  9:27                                           ` Arthur Miller
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-24  9:06 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Arthur Miller <arthur.miller@live.com> [2020-11-24 11:15]:
> > I am pulling Emacs from git and consider system upgraded that way.
> same here
> 
> > For system packages, pacman says there is nothing to do most of the
> > time, unless there is a new kernel or some security issue.
> 
> Aha, you are running the LTS kernel?
> 
> My pacman brings in updates every day.

Really?

/boot:

config-linux-libre-lts
grub
initramfs-linux-libre-lts-fallback.img
initramfs-linux-libre-lts.img
vmlinuz-linux-libre-lts

So you have Hyperbola and you get updates every day? How come?







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24  9:06                                         ` Jean Louis
@ 2020-11-24  9:27                                           ` Arthur Miller
  2020-11-24 17:18                                             ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-24  9:27 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

Jean Louis <bugs@gnu.support> writes:

> * Arthur Miller <arthur.miller@live.com> [2020-11-24 11:15]:
>> > I am pulling Emacs from git and consider system upgraded that way.
>> same here
>> 
>> > For system packages, pacman says there is nothing to do most of the
>> > time, unless there is a new kernel or some security issue.
>> 
>> Aha, you are running the LTS kernel?
>> 
>> My pacman brings in updates every day.
>
> Really?
Yepp; but I am not on the LTS kernel, that is probably why.

> /boot:
>
> config-linux-libre-lts
> grub
> initramfs-linux-libre-lts-fallback.img
> initramfs-linux-libre-lts.img
> vmlinuz-linux-libre-lts
>
> So you have Hyperbola and you get updates every day? How come?
No Hyperbola; I don't even know what distro that is. Just Arch Linux here.

I guess it's because I am not on the LTS kernel, and probably because I
have lots of stuff installed.

Hard drives are cheap nowadays. I have the entire kde/gnome stack
installed, and lots more. When I need to compile a library or
application I don't want to chase dependencies around. I just don't use
them as desktops and don't run the apps.  For example, yesterday I was
able to just git clone heaptrack and compile it, no headaches.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-23 21:30                   ` Trevor Bentley
  2020-11-23 22:11                     ` Trevor Bentley
@ 2020-11-24 16:07                     ` Eli Zaretskii
  2020-11-24 19:05                       ` Trevor Bentley
  2020-11-25 17:45                       ` Carlos O'Donell
  1 sibling, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-24 16:07 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>  michael_heerdegen@web.de, carlos@redhat.com
> Cc: 
> Date: Mon, 23 Nov 2020 22:30:57 +0100
> 
> ;;--------------------------------------
> ;;--------------------------------------
> ;; 3 days later
> ;; RSS was steady at 1150MB
> ;; leaped to 2.3GB very suddenly
> ;;
> ;; RSS 2311M
> ;;--------------------------------------
> ;;--------------------------------------

> ;; ~182MB
> (let ((size 0))
>   (dolist (buffer (buffer-list) size)
>     (setq size (+ size (buffer-size buffer)))))
> 182903045
> 
> ;; sums to ~142MB if I'm reading it right?
> (garbage-collect)
> ((conses 16 2081486 2630206) (symbols 48 61019 79) (strings 32 353371 288980) (string-bytes 1 13294206) (vectors 16 144742) (vector-slots 8 9503757 592939) (floats 8 2373 8320) (intervals 56 46660 10912) (buffers 992 82))

> (reduce '+ (cl-loop for thing in (garbage-collect)
>                     collect (* (nth 1 thing) (nth 2 thing))))
> 142115406
> 
> ;; malloc-info
> (malloc-info)
> <malloc version="1">
> <heap nr="0">
> <sizes>
>   <size from="33" to="48" total="240" count="5"/>
>   <size from="113" to="128" total="128" count="1"/>
> [...]
>   <size from="3137" to="3569" total="2372709" count="709"/>
>   <size from="3585" to="4081" total="1847856" count="480"/>
>   <size from="4097" to="4593" total="5672856" count="1320"/>
>   <size from="4609" to="5105" total="4675836" count="956"/>
>   <size from="5121" to="5617" total="6883318" count="1286"/>
>   <size from="5633" to="6129" total="6011919" count="1023"/>
>   <size from="6145" to="6641" total="6239871" count="975"/>
>   <size from="6657" to="7153" total="6540165" count="949"/>
>   <size from="7169" to="7665" total="5515848" count="744"/>
>   <size from="7681" to="8177" total="5148216" count="648"/>
>   <size from="8193" to="8689" total="8190223" count="975"/>
>   <size from="8705" to="9201" total="5854315" count="651"/>
>   <size from="9217" to="9713" total="5312354" count="562"/>
>   <size from="9729" to="10225" total="5154212" count="516"/>
>   <size from="10241" to="10737" total="4074005" count="389"/>
>   <size from="10753" to="12273" total="11387550" count="990"/>
>   <size from="12289" to="16369" total="32661229" count="2317"/>
>   <size from="16385" to="20465" total="36652437" count="2037"/>
>   <size from="20481" to="24561" total="21272131" count="947"/>
>   <size from="24577" to="28657" total="25462302" count="958"/>
>   <size from="28673" to="32753" total="28087234" count="914"/>
>   <size from="32769" to="36849" total="39080113" count="1121"/>
>   <size from="36865" to="40945" total="30141527" count="775"/>
>   <size from="40961" to="65521" total="166092799" count="3119"/>
>   <size from="65537" to="98289" total="218425380" count="2692"/>
>   <size from="98321" to="131057" total="178383171" count="1555"/>
>   <size from="131089" to="163825" total="167800886" count="1142"/>
>   <size from="163841" to="262065" total="367649915" count="1819"/>
>   <size from="262161" to="522673" total="185347984" count="560"/>
>   <size from="525729" to="30878897" total="113322865" count="97"/>

Look at the large chunks in the tail of this.  Together, they do
account for ~2GB.

Carlos, are these chunks in use (i.e. allocated and not freed), or are
they the free chunks that are available for allocation, but not
released to the OS?  If the former, then it sounds like this session
does have around 2GB of allocated heap data, so either there's some
allocated memory we don't account for, or there is indeed a memory
leak in Emacs.  If these are the free chunks, then the way glibc
manages free'd memory is indeed an issue.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24  9:27                                           ` Arthur Miller
@ 2020-11-24 17:18                                             ` Jean Louis
  2020-11-25 14:59                                               ` Arthur Miller
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-24 17:18 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Arthur Miller <arthur.miller@live.com> [2020-11-24 12:27]:
> Yepp; but I am not on the LTS kernel, that is probably why.

I think it is the other issue, that you have many packages. I also have
many for Gnome and KDE but do not get updates; maybe I use a mirror that
is not updated. I will look into that.

> > So you have Hyperbola and you get updates every day? How come?
> No Hyperbola; I don't even know what distro that is. Just Arch Linux
> here.

Well then it is different thing. You are updating from different
repository than me.

> Hard drives are cheap nowadays. I have the entire kde/gnome stack
> installed, and lots more. When I need to compile a library or
> application I don't want to chase dependencies around. I just don't use
> them as desktops and don't run the apps.  For example, yesterday I was
> able to just git clone heaptrack and compile it, no headaches.

That is a different OS; Hyperbola is different. Arch Linux has a lax
policy toward non-free software, while Hyperbola GNU/Linux-libre has a
very strict policy and does not allow anything non-free; that is the
reason I am using it. It does not use the systemd trap and runs stably.

A few times I had problems building, for example, webkit, but
otherwise everything builds pretty well.

Hyperbola is an independent project that receives little support; it
should receive much more. They will also create a new HyperbolaBSD
system that will move an OpenBSD kernel in the GNU GPL direction.

Jean






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24 16:07                     ` Eli Zaretskii
@ 2020-11-24 19:05                       ` Trevor Bentley
  2020-11-24 19:35                         ` Eli Zaretskii
  2020-11-25 17:45                       ` Carlos O'Donell
  1 sibling, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-24 19:05 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:
> 
> Look at the large chunks in the tail of this.  Together, they do 
> account for ~2GB. 
> 
> Carlos, are these chunks in use (i.e. allocated and not freed), 
> or are they the free chunks that are available for allocation, 
> but not released to the OS?  If the former, then it sounds like 
> this session does have around 2GB of allocated heap data, so 
> either there's some allocated memory we don't account for, or 
> there is indeed a memory leak in Emacs.  If these are the free 
> chunks, then the way glibc manages free'd memory is indeed an 
> issue. 

I just updated the log on my website.  Same instance a day later, 
after yet another memory spike up to 4.3GB.  Concatenated to the 
end:

https://trevorbentley.com/emacs_malloc_info.log

Some interesting observations:
 - (garbage-collect) takes forever, like on the order of 5-10 
 minutes, with one CPU core pegged to 100% and emacs frozen.
 - The leaking stops for a while after (garbage-collect).  It was 
 leaking 1MB per second for this last log, and stopped growing 
 after the garbage collection.

Question 1: (garbage-collect) shows the memory usage *after* 
collecting, right?  Is there any way to get the same info without 
actually reaping dead references?  It could be that there really 
were 4.3GB of dead references.

Question 2: are the background garbage collections equivalent to 
the (garbage-collect) function?  I certainly don't notice 5-10 
minute long pauses during normal use, though "gcs-done" is 
incrementing.  Does it have a different algorithm for partial 
collection during idle, perhaps?

Question 3: I've never used the malloc_trim() function.  Could 
that be something worth experimenting with, to see if it releases 
any of the massive heap back to the OS?
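
For the record, stock Emacs 28 has no Lisp binding for malloc_trim(),
so anything like the following is hypothetical unless a build adds
one; guarding the call keeps it harmless either way:

  ;; Hypothetical: only does something if this build defines `malloc-trim'.
  (when (fboundp 'malloc-trim)
    (malloc-trim))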

-Trevor






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24 19:05                       ` Trevor Bentley
@ 2020-11-24 19:35                         ` Eli Zaretskii
  2020-11-25 10:22                           ` Trevor Bentley
  2020-11-25 17:48                           ` Carlos O'Donell
  0 siblings, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-24 19:35 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de, carlos@redhat.com
> Cc: 
> Date: Tue, 24 Nov 2020 20:05:15 +0100
> 
> I just updated the log on my website.  Same instance a day later, 
> after yet another memory spike up to 4.3GB.  Concatenated to the 
> end:
> 
> https://trevorbentley.com/emacs_malloc_info.log

I don't think I can interpret that.  In particular, how come "total"
is 4GB, but I see no comparable sizes in any of the other fields?
Where do those 4GB hide?  Carlos, can you help interpret this
report?

> Some interesting observations:
>  - (garbage-collect) takes forever, like on the order of 5-10 
>  minutes, with one CPU core pegged to 100% and emacs frozen.

Is this with the default values of gc-cons-threshold and
gc-cons-percentage?

>  - The leaking stops for a while after (garbage-collect).  It was 
>  leaking 1MB per second for this last log, and stopped growing 
>  after the garbage collection.

Now, what happens in that session once per second (in an otherwise
idle Emacs, I presume?) to cause such memory consumption?  Some
timers?  If you run with a breakpoint in malloc that just shows the
backtrace and continues, do you see what could consume 1MB every
second?

> Question 1: (garbage-collect) shows the memory usage *after* 
> collecting, right?

Yes.

> Is there any way to get the same info without actually reaping dead
> references?

What do you mean by "reaping dead references" here?

> It could be that there really were 4.3GB of dead references.

Not sure I understand what you are trying to establish here.

> Question 2: are the background garbage collections equivalent to 
> the (garbage-collect) function?  I certainly don't notice 5-10 
> minute long pauses during normal use, though "gcs-done" is 
> incrementing.  Does it have a different algorithm for partial 
> collection during idle, perhaps?

There's only one garbage-collect, it is called for _any_ GC.

What do you mean by "during normal use" in this sentence:

  I certainly don't notice 5-10 minute long pauses during normal use,
  though "gcs-done" is incrementing.

How is what you did here, where GC took several minutes, different
from "normal usage"?

> Question 3: I've never used the malloc_trim() function.  Could 
> that be something worth experimenting with, to see if it releases 
> any of the massive heap back to the OS?

That's for glibc guys to answer.

Thanks.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24 19:35                         ` Eli Zaretskii
@ 2020-11-25 10:22                           ` Trevor Bentley
  2020-11-25 17:47                             ` Eli Zaretskii
  2020-11-25 17:48                           ` Carlos O'Donell
  1 sibling, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-25 10:22 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes: 
>> Some interesting observations: 
>>  - (garbage-collect) takes forever, like on the order of 5-10 
>>  minutes, with one CPU core pegged to 100% and emacs frozen. 
> 
> Is this with the default values of gc-cons-threshold and 
> gc-cons-percentage? 

Yes, and they're both printed in the logs: threshold 800000, 
percentage 0.1.
 
>>  - The leaking stops for a while after (garbage-collect).  It 
>>  was  leaking 1MB per second for this last log, and stopped 
>>  growing  after the garbage collection. 
> 
> Now, what happens in that session once per second (in an 
> otherwise idle Emacs, I presume?) to cause such memory 
> consumption?  Some timers?  If you run with a breakpoint in 
> malloc that just shows the backtrace and continues, do you see 
> what could consume 1MB every second? 

Not an idle emacs at all, in this case.  I have seen the memory 
growth in an idle emacs, but the only one I can reproduce it on is 
the emacs-slack one, which is connected to a corporate Slack 
account.  Tons of short messages streaming in over the network and 
being displayed in rotating buffers, with images mixed in.  It's a 
big ol' "web 2.0" API... it can easily pass 1MB/s of bloated JSON 
messages through.  This is one _very active_ emacs.

The original strace logs and valgrind output I posted before 
showed a random assortment of calls from gnutls, imagemagick, and 
lisp strings, with lisp strings dominating the malloc calls 
(enlarge_buffer_text, mostly).
 
>> Is there any way to get the same info without actually reaping 
>> dead references? 
> 
> What do you mean by "reaping dead references" here? 
> 
>> It could be that there really were 4.3GB of dead references. 
> 
> Not sure I understand what you are trying to establish here. 
>

GC is running through a list of active allocations and freeing the 
ones with no remaining references, right?  Presumably, a lot of 
active malloc() allocations are no longer referenced, and 
(garbage-collect) calls free() on a bunch of blocks.  I'm 
wondering how to figure out how much memory a call to 
(garbage-collect) has actually freed.  Possibly a sort of "dry 
run" where it performs the GC algorithm, but doesn't release any 
memory.

(I'm very much assuming how emacs memory management works.  Please 
correct me if I'm wrong.)
 
> There's only one garbage-collect, it is called for _any_ GC. 
> 
> What do you mean by "during normal use" in this sentence: 
> 
>   I certainly don't notice 5-10 minute long pauses during normal 
>   use, though "gcs-done" is incrementing. 
> 
> How is what you did here, where GC took several minutes, 
> different from "normal usage"?

In this log, I am explicitly executing "(garbage-collect)", and it 
takes 10 minutes, during which the UI is unresponsive and 
sometimes even turns grey when the window stops redrawing.

By "normal use", I mean that I use this emacs instance on-and-off 
all day long.  I would notice if it were freezing for minutes at a 
time, and it definitely is not.

As far as I understand, garbage collection is supposed to happen 
automatically during idle.  I would certainly notice if it locked 
up the whole instance for 10 minutes from an idle GC.  I think 
this means the automatic garbage collection is either not 
happening, or running on a different thread, or being interrupted, 
or simply works differently.  I have no idea, hence asking you :)

The confusing part is that "gcs-done" increments a lot between my 
manual (garbage-collect) calls.  It looks like it does about 500 
per day.  There is no way emacs freezes and pegs a CPU core to max 
500 times per day, but it does exactly that every time I manually 
execute garbage-collect. 
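
For what it's worth, the built-in counters give a mean pause per GC:

  ;; Average seconds spent per garbage collection so far.
  (/ gc-elapsed gcs-done)

For the numbers earlier in this thread (760.6 seconds over 2140 GCs)
that comes out to roughly a third of a second per GC, nothing like the
10-minute manual runs.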

Side note: it inflated to 7670MB overnight.  I'm running 
(garbage-collect) as I type this, but it has been churning for 30 
minutes with the UI frozen, and still isn't done.  I'm going to 
give up and kill it if it doesn't finish soon, as I kind of need 
that 8GB back.
 
-Trevor






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24 17:18                                             ` Jean Louis
@ 2020-11-25 14:59                                               ` Arthur Miller
  2020-11-25 15:09                                                 ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Arthur Miller @ 2020-11-25 14:59 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

Jean Louis <bugs@gnu.support> writes:

> * Arthur Miller <arthur.miller@live.com> [2020-11-24 12:27]:
>> Yepp; but I am not on lts-kernel, that is probably why.
>
> I think it is the other issue that you hve many packages, I also have
> many for Gnome and KDE but do not get updates, maybe I use mirror that
> is not updated. I will see that.
>
>> > So you have Hyperbola and you get updates every day? How comes?
>> No Hyperbola don't even know what distro it is; Just Arch Linux
>> here.
>
> Well then it is different thing. You are updating from different
> repository than me.
>
>> Hard drives are cheap nowadays. I have the entire kde/gnome stack
>> installed, and lots more. When I need to compile a library or
>> application I don't want to chase dependencies around. I just don't use
>> them as desktops and don't run the apps.  For example, yesterday I was
>> able to just git clone heaptrack and compile it, no headaches.
>
> That is different OS and Hyperbola is different. Arch Linux has lax policy
> against non-free software, while Hyperbola GNU/Linux-libre has very
> strict policy and does not allow anything non-free, that is reason I
> am using it. It does not use systemd trap and is working stable.
>
> Few times I got problem with building for example webkit, but
> otherwise anything builds pretty well.
>
> Hyperbola is independent project that receives little support, it
> should receive so much more. They will also create new HyperbolaBSD
> system that will move an OpenBSD kernel into GNU GPL direction.
>
> Jean
Oki; thanks. I had never heard of Hyperbola before.






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 14:59                                               ` Arthur Miller
@ 2020-11-25 15:09                                                 ` Jean Louis
  0 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-25 15:09 UTC (permalink / raw)
  To: Arthur Miller; +Cc: fweimer, 43389, dj, michael_heerdegen, trevor, carlos

* Arthur Miller <arthur.miller@live.com> [2020-11-25 17:59]:
> > Hyperbola is independent project that receives little support, it
> > should receive so much more. They will also create new HyperbolaBSD
> > system that will move an OpenBSD kernel into GNU GPL direction.
> >
> > Jean
> Oki; thanks. I had never heard of Hyperbola before.

https://www.hyperbola.info

And there are other fully free operating systems endorsed by the FSF
such as:

Trisquel GNU/Linux-libre
https://trisquel.info

and others on https://www.gnu.org

Those are the only ones I am using, due to the agreement among people
to provide fully free software without access to anything non-free.

Jean






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24 16:07                     ` Eli Zaretskii
  2020-11-24 19:05                       ` Trevor Bentley
@ 2020-11-25 17:45                       ` Carlos O'Donell
  2020-11-25 18:03                         ` Eli Zaretskii
                                           ` (2 more replies)
  1 sibling, 3 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-25 17:45 UTC (permalink / raw)
  To: Eli Zaretskii, Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen

On 11/24/20 11:07 AM, Eli Zaretskii wrote:
> Look at the large chunks in the tail of this.  Together, they do
> account for ~2GB.
> 
> Carlos, are these chunks in use (i.e. allocated and not freed), or are
> they the free chunks that are available for allocation, but not
> released to the OS?  If the former, then it sounds like this session
> does have around 2GB of allocated heap data, so either there's some
> allocated memory we don't account for, or there is indeed a memory
> leak in Emacs.  If these are the free chunks, then the way glibc
> manages free'd memory is indeed an issue.

These chunks are all free and mapped for use by the algorithm to satisfy
a request by the application.

Looking at the last malloc_info (annotated):
https://trevorbentley.com/emacs_malloc_info.log
===============================================
;; malloc-info
(malloc-info)
<malloc version="1">
<heap nr="0">
<sizes>
</sizes>
<total type="fast" count="0" size="0"/>

=> No fast bins.

<total type="rest" count="1" size="112688"/>

=> 1 unused bin.

=> In total we have only 112KiB in 1 unused chunk free'd on the heap.
=> The rest of the heap is in use by the application.
=> It looks like the application usage goes down to zero and then up again?

<system type="current" size="4243079168"/>

=> Currently at 4.2GiB in arena 0 (kernel assigned heap).
=> The application is using that sbrk'd memory.

<system type="max" size="4243079168"/>
<aspace type="total" size="4243079168"/>
<aspace type="mprotect" size="4243079168"/>

=> This indicates *real* API usage of 4.2GiB.

</heap>
<heap nr="1">

=> This is arena 1, which is a thread heap, and uses mmap to create heaps.

<sizes>
  <size from="17" to="32" total="32" count="1"/>
  <size from="33" to="48" total="240" count="5"/>
  <size from="49" to="64" total="256" count="4"/>
  <size from="65" to="80" total="160" count="2"/>
  <size from="97" to="112" total="224" count="2"/>
  <size from="33" to="33" total="231" count="7"/>
  <size from="49" to="49" total="294" count="6"/>
  <size from="65" to="65" total="390" count="6"/>
  <size from="81" to="81" total="162" count="2"/>
  <size from="97" to="97" total="97" count="1"/>
  <size from="129" to="129" total="516" count="4"/>
  <size from="161" to="161" total="644" count="4"/>
  <size from="209" to="209" total="1254" count="6"/>
  <size from="241" to="241" total="241" count="1"/>
  <size from="257" to="257" total="257" count="1"/>
  <size from="305" to="305" total="610" count="2"/>
  <size from="32209" to="32209" total="32209" count="1"/>
  <size from="3982129" to="8059889" total="28065174" count="6"/>
  <unsorted from="209" to="4020593" total="4047069" count="13"/>
</sizes>
<total type="fast" count="14" size="912"/>
<total type="rest" count="61" size="42357420"/>

=> Pretty small, 912 bytes in fastbins, and 42MiB in cached chunks.

<system type="current" size="42426368"/>
<system type="max" size="42426368"/>
<aspace type="total" size="42426368"/>
<aspace type="mprotect" size="42426368"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="14" size="912"/>
<total type="rest" count="62" size="42470108"/>
<total type="mmap" count="9" size="208683008"/>
<system type="current" size="4285505536"/>
<system type="max" size="4285505536"/>
<aspace type="total" size="4285505536"/>
<aspace type="mprotect" size="4285505536"/>
</malloc>
===============================================

This shows the application is USING memory on the main system heap.

It might not be "leaked" memory since the application might be using it.

You want visibility into what is USING that memory.

With glibc-malloc-trace-utils you can try to do that with:

LD_PRELOAD=libmtrace.so \
MTRACE_CTL_FILE=/home/user/app.mtr \
MTRACE_CTL_BACKTRACE=1 \
./app

This will use libgcc's unwinder to get a copy of the malloc caller
address and then we'll have to decode that based on a /proc/self/maps.

Next steps:
- Get a glibc-malloc-trace-utils trace of the application ratcheting.
- Get a copy of /proc/$PID/maps for the application (shorter version of smaps).

Then we might be able to correlate where all the kernel heap data went?
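
From inside the running Emacs, the maps snapshot can be captured with
something like this (a sketch; the output path is arbitrary):

  ;; Save /proc/PID/maps for the current Emacs process to a file.
  (with-temp-file (expand-file-name "~/emacs-maps.txt")
    (insert-file-contents (format "/proc/%d/maps" (emacs-pid))))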

-- 
Cheers,
Carlos.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 10:22                           ` Trevor Bentley
@ 2020-11-25 17:47                             ` Eli Zaretskii
  2020-11-25 19:06                               ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-25 17:47 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de, carlos@redhat.com
> Date: Wed, 25 Nov 2020 11:22:16 +0100
> 
> >>  - The leaking stops for a while after (garbage-collect).  It 
> >>  was  leaking 1MB per second for this last log, and stopped 
> >>  growing  after the garbage collection. 
> > 
> > Now, what happens in that session once per second (in an 
> > otherwise idle Emacs, I presume?) to cause such memory 
> > consumption?  Some timers?  If you run with a breakpoint in 
> > malloc that just shows the backtrace and continues, do you see 
> > what could consume 1MB every second? 
> 
> Not an idle emacs at all, in this case.  I have seen the memory 
> growth in an idle emacs, but the only one I can reproduce it on is 
> the emacs-slack one, which is connected to a corporate Slack 
> account.  Tons of short messages streaming in over the network and 
> being displayed in rotating buffers, with images mixed in.  It's a 
> big 'ol "web 2.0" API... it can easily pass 1MB/s of bloated JSON 
> messages through.  This is one _very active_ emacs.

Then I don't think we will be able to understand what consumes memory
at such a high rate without some debugging.  Have you considered using
breakpoints and collecting backtraces, as I suggested earlier?

The hard problem is to understand which memory is allocated and not
freed "soon enough", but for such a high rate of memory consumption
perhaps just knowing which code requests so much memory would be an
important clue.

> The original strace logs and valgrind output I posted before 
> showed a random assortment of calls from gnutls, imagemagick, and 
> lisp strings, with lisp strings dominating the malloc calls 
> (enlarge_buffer_text, mostly).

Enlarging buffer text generally causes malloc to call mmap (as opposed
to brk/sbrk), so this cannot cause a situation where a lot of unused
memory is not returned to the OS.  And we already saw that just by
summing up the buffer text memory we never get even close to the VM
size of the process.

> > What do you mean by "reaping dead references" here? 
> > 
> >> It could be that there really were 4.3GB of dead references. 
> > 
> > Not sure I understand what you are trying to establish here. 
> 
> GC is running through a list of active allocations and freeing the 
> ones with no remaining references, right?  Presumably, a lot of
> active malloc() allocations are no longer referenced, and
> (garbage-collect) calls free() on a bunch of blocks.

We only call free on "unfragmented" Lisp data, e.g. if some block of
Lisp strings was freed in its entirety.  If some Lisp objects in a
block are still alive, we don't free the block, we just mark the freed
Lisp objects as being free and available for reuse.

So the result of GC only tells you how much of the memory was freed
but NOT returned to glibc; it doesn't show how much was actually
returned to the OS.
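
Both numbers are visible from Lisp, though; a sketch that sums live
bytes and freed-but-retained bytes from the (garbage-collect) return
value (entries look like (TYPE SIZE USED FREE), with FREE sometimes
absent):

  (let ((live 0) (retained 0))
    (dolist (entry (garbage-collect) (list live retained))
      (let ((size (nth 1 entry))
            (used (nth 2 entry))
            (free (nth 3 entry)))   ; nil for types that don't report it
        (setq live (+ live (* size used)))
        (when free
          (setq retained (+ retained (* size free)))))))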

> I'm wondering how to figure out how much memory a call to
> (garbage-collect) has actually freed.  Possibly a sort of "dry run"
> where it performs the GC algorithm, but doesn't release any memory.

"Freed" in what sense? returned to glibc?

> > There's only one garbage-collect, it is called for _any_ GC. 
> > 
> > What do you mean by "during normal use" in this sentence: 
> > 
> >   I certainly don't notice 5-10 minute long pauses during normal 
> >   use, though "gcs-done" is incrementing. 
> > 
> > How is what you did here, where GC took several minutes, 
> > different from "normal usage"?
> 
> In this log, I am explicitly executing "(garbage-collect)", and it 
> takes 10 minutes, during which the UI is unresponsive and 
> sometimes even turns grey when the window stops redrawing.
> 
> By "normal use", I mean that I use this emacs instance on-and-off 
> all day long.  I would notice if it were freezing for minutes at a 
> time, and it definitely is not.
> 
> As far as I understand, garbage collection is supposed to happen 
> automatically during idle.  I would certainly notice if it locked 
> up the whole instance for 10 minutes from an idle GC.  I think 
> this means the automatic garbage collection is either not 
> happening, or running on a different thread, or being interrupted, 
> or simply works differently.  I have no idea, hence asking you :)

That is very strange.  There's only one function to perform GC, and it
is called both from garbage-collect and from an internal function
called when Emacs is idle or when it calls interpreter functions like
'eval' or 'funcall'.  The only thing garbage-collect does that the
internal function doesn't is generate the list that is the return
value of garbage-collect, but that cannot possibly take minutes.

I suggest setting garbage-collection-messages non-nil; then you should
see when each GC, whether the one you invoke interactively or the
automatic ones, starts and ends.  Maybe the minutes you wait are not
directly related to GC, but to something else that is triggered by GC?
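
Concretely:

  ;; Announce every GC (automatic or manual) in the echo area.
  (setq garbage-collection-messages t)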






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-24 19:35                         ` Eli Zaretskii
  2020-11-25 10:22                           ` Trevor Bentley
@ 2020-11-25 17:48                           ` Carlos O'Donell
  1 sibling, 0 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-25 17:48 UTC (permalink / raw)
  To: Eli Zaretskii, Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen

On 11/24/20 2:35 PM, Eli Zaretskii wrote:
>> From: Trevor Bentley <trevor@trevorbentley.com>
>> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>>  dj@redhat.com, michael_heerdegen@web.de, carlos@redhat.com
>> Cc: 
>> Date: Tue, 24 Nov 2020 20:05:15 +0100
>>
>> I just updated the log on my website.  Same instance a day later, 
>> after yet another memory spike up to 4.3GB.  Concatenated to the 
>> end:
>>
>> https://trevorbentley.com/emacs_malloc_info.log
> 
> I don't think I can interpret that.  In particular, how come "total"
> is 4GB, but I see no comparable sizes in any of the other fields?
> where do those 4GB hide?  Carlos, can you help interpreting this
> report?

The 4GiB are in use by the application and it is up to us to increase
the observability of that usage with our tooling.

>> Question 3: I've never used the malloc_trim() function.  Could 
>> that be something worth experimenting with, to see if it releases 
>> any of the massive heap back to the OS?
> 
> That's for glibc guys to answer.

If malloc_info() shows memory that is free'd and unused then malloc_trim()
can free back any unused pages to the OS.

However, your latest malloc_info() output shows only ~50MiB of unused
memory out of ~4GiB, so calling malloc_trim() would free only ~50MiB.
There is heavy usage of the kernel heap by something; finding out what
is using that memory is our next step.

-- 
Cheers,
Carlos.







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 17:45                       ` Carlos O'Donell
@ 2020-11-25 18:03                         ` Eli Zaretskii
  2020-11-25 18:57                           ` Carlos O'Donell
  2020-11-26  9:09                           ` Jean Louis
  2020-11-25 18:08                         ` Jean Louis
  2020-11-26 12:37                         ` Trevor Bentley
  2 siblings, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-25 18:03 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Wed, 25 Nov 2020 12:45:04 -0500
> 
> On 11/24/20 11:07 AM, Eli Zaretskii wrote:
> > Look at the large chunks in the tail of this.  Together, they do
> > account for ~2GB.
> > 
> > Carlos, are these chunks in use (i.e. allocated and not freed), or are
> > they the free chunks that are available for allocation, but not
> > released to the OS?  If the former, then it sounds like this session
> > does have around 2GB of allocated heap data, so either there's some
> > allocated memory we don't account for, or there is indeed a memory
> > leak in Emacs.  If these are the free chunks, then the way glibc
> > manages free'd memory is indeed an issue.
> 
> These chunks are all free and mapped for use by the algorithm to satisfy
> a request by the application.

So we have more than 1.5GB free memory available for allocation, is
that right?

But then how to reconcile this with what you say next:

> <system type="current" size="4243079168"/>
> 
> => Currently at 4.2GiB in arena 0 (kernel assigned heap).
> => The application is using that sbrk'd memory.
> 
> <system type="max" size="4243079168"/>
> <aspace type="total" size="4243079168"/>
> <aspace type="mprotect" size="4243079168"/>
> 
> => This indicates *real* API usage of 4.2GiB.

Here you seem to say that these 4.2GB are _used_ by the application?
While I thought the large chunks I asked about, which total more than
1.5GB, are a significant part of those 4.2GB?

To make sure there are no misunderstandings, I'm talking about this
part of the log:

  <heap nr="0">
  <sizes>
    [...]
    <size from="10753" to="12273" total="11387550" count="990"/>
    <size from="12289" to="16369" total="32661229" count="2317"/>
    <size from="16385" to="20465" total="36652437" count="2037"/>
    <size from="20481" to="24561" total="21272131" count="947"/>
    <size from="24577" to="28657" total="25462302" count="958"/>
    <size from="28673" to="32753" total="28087234" count="914"/>
    <size from="32769" to="36849" total="39080113" count="1121"/>
    <size from="36865" to="40945" total="30141527" count="775"/>
    <size from="40961" to="65521" total="166092799" count="3119"/>
    <size from="65537" to="98289" total="218425380" count="2692"/>
    <size from="98321" to="131057" total="178383171" count="1555"/>
    <size from="131089" to="163825" total="167800886" count="1142"/>
    <size from="163841" to="262065" total="367649915" count="1819"/>
    <size from="262161" to="522673" total="185347984" count="560"/>
    <size from="525729" to="30878897" total="113322865" count="97"/>
    <unsorted from="33" to="33" total="33" count="1"/>
  </sizes>

If I sum up the "total=" parts of these large numbers, I get 1.6GB.
Is this free memory, given back to glibc for future allocations from
this arena, and if so, are those 1.6GB part of the 4.2GB total?
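
The summing can be done mechanically over a pasted malloc-info report
with a throwaway command like this (the name is made up):

  (defun my/sum-malloc-totals (beg end)
    "Sum the total=\"...\" attributes in the region and echo the result."
    (interactive "r")
    (let ((sum 0))
      (save-excursion
        (goto-char beg)
        (while (re-search-forward "total=\"\\([0-9]+\\)\"" end t)
          (setq sum (+ sum (string-to-number (match-string 1))))))
      (message "%s bytes (%.2f GiB)" sum (/ sum 1073741824.0))))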

> This shows the application is USING memory on the main system heap.
> 
> It might not be "leaked" memory since the application might be using it.
> 
> You want visibility into what is USING that memory.
> 
> With glibc-malloc-trace-utils you can try to do that with:
> 
> LD_PRELOAD=libmtrace.so \
> MTRACE_CTL_FILE=/home/user/app.mtr \
> MTRACE_CTL_BACKTRACE=1 \
> ./app
> 
> This will use libgcc's unwinder to get a copy of the malloc caller
> address and then we'll have to decode that based on a /proc/self/maps.
> 
> Next steps:
> - Get a glibc-malloc-trace-utils trace of the application ratcheting.
> - Get a copy of /proc/$PID/maps for the application (shorter version of smaps).
> 
> Then we might be able to correlate where all the kernel heap data went?

Thanks for the instructions.  Would people please try that and report
the results?






* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 17:45                       ` Carlos O'Donell
  2020-11-25 18:03                         ` Eli Zaretskii
@ 2020-11-25 18:08                         ` Jean Louis
  2020-11-25 18:51                           ` Trevor Bentley
  2020-11-25 19:01                           ` Carlos O'Donell
  2020-11-26 12:37                         ` Trevor Bentley
  2 siblings, 2 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-25 18:08 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, dj, michael_heerdegen, Trevor Bentley

* Carlos O'Donell <carlos@redhat.com> [2020-11-25 20:45]:
> With glibc-malloc-trace-utils you can try to do that with:
> 
> LD_PRELOAD=libmtrace.so \
> MTRACE_CTL_FILE=/home/user/app.mtr \
> MTRACE_CTL_BACKTRACE=1 \
> ./app
> 
> This will use libgcc's unwinder to get a copy of the malloc caller
> address and then we'll have to decode that based on a
> /proc/self/maps.

I will also try that in the next session.

One problem I have here is that since I started this session I have not
had any problems. My uptime is over 2 days, I have not changed my
working habits within Emacs, and my swap remains under 200 MB with only
10% of memory used by Emacs, normally 80-90%.

Almost as a rule, I could not run longer than 1 day before I would get
about 3-4 GB of swap and an unresponsive Emacs.

Could it be that libmtrace.so prevents something that normally
happens?







* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 18:08                         ` Jean Louis
@ 2020-11-25 18:51                           ` Trevor Bentley
  2020-11-25 19:02                             ` Carlos O'Donell
  2020-11-25 19:01                           ` Carlos O'Donell
  1 sibling, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-25 18:51 UTC (permalink / raw)
  To: Jean Louis, Carlos O'Donell; +Cc: fweimer, 43389, dj, michael_heerdegen

Jean Louis <bugs@gnu.support> writes:

>> This will use libgcc's unwinder to get a copy of the malloc 
>> caller address and then we'll have to decode that based on a 
>> /proc/self/maps. 
> 
> I will also try that in the next session. 

As will I, but probably won't set it up until this weekend.
 
> One problem I have here is that since I started this session I have 
> not had any problems. My uptime is over 2 days, I have not 
> changed my working habits within Emacs, and my swap remains 
> under 200 MB with only 10% of memory used by Emacs, normally 80-90%. 
> 
> Almost as a rule, I could not run longer than 1 day before I 
> would get about 3-4 GB of swap and an unresponsive Emacs. 
> 
> Could it be that libmtrace.so prevents something that normally 
> happens? 

I see high variation in how long it takes to hit it on my machine. 
The shortest was after ~4 hours, average is 1.5 days, and the 
longest was 5 days.  Perhaps you're seeing the same.

I also still hit it while running under Valgrind; the whole emacs 
session was slow as hell, but still managed to blow out its heap 
in a few days.  Of course, libmtrace could be different, but at 
least it doesn't seem to be a heisenbug.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 18:03                         ` Eli Zaretskii
@ 2020-11-25 18:57                           ` Carlos O'Donell
  2020-11-25 19:13                             ` Eli Zaretskii
  2020-11-26  9:09                           ` Jean Louis
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-25 18:57 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

On 11/25/20 1:03 PM, Eli Zaretskii wrote:
>> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>>  dj@redhat.com, michael_heerdegen@web.de
>> From: Carlos O'Donell <carlos@redhat.com>
>> Date: Wed, 25 Nov 2020 12:45:04 -0500
>>
>> On 11/24/20 11:07 AM, Eli Zaretskii wrote:
>>> Look at the large chunks in the tail of this.  Together, they do
>>> account for ~2GB.
>>>
>>> Carlos, are these chunks in use (i.e. allocated and not freed), or are
>>> they the free chunks that are available for allocation, but not
>>> released to the OS?  If the former, then it sounds like this session
>>> does have around 2GB of allocated heap data, so either there's some
>>> allocated memory we don't account for, or there is indeed a memory
>>> leak in Emacs.  If these are the free chunks, then the way glibc
>>> manages free'd memory is indeed an issue.
>>
>> These chunks are all free and mapped for use by the algorithm to satisfy
>> a request by the application.
> 
> So we have more than 1.5GB free memory available for allocation, is
> that right?

There are 3 malloc_info traces in the log.

1. Lines 47-219. Day 1: 1100MiB of RSS.
2. Lines 386-556. Day 4: 2.3GiB of RSS.
3. Lines 744-792. Day 5: 4.2GiB of RSS.

Lines are numbered for the log starting at 1.
 
> To make sure there are no misunderstandings, I'm talking about this
> part of the log:

Your analysis is for trace #2, lines 386-556.

My analysis was for trace #3, lines 744-792.

>   <heap nr="0">
>   <sizes>
>     [...]
>     <size from="10753" to="12273" total="11387550" count="990"/>
>     <size from="12289" to="16369" total="32661229" count="2317"/>
>     <size from="16385" to="20465" total="36652437" count="2037"/>
>     <size from="20481" to="24561" total="21272131" count="947"/>
>     <size from="24577" to="28657" total="25462302" count="958"/>
>     <size from="28673" to="32753" total="28087234" count="914"/>
>     <size from="32769" to="36849" total="39080113" count="1121"/>
>     <size from="36865" to="40945" total="30141527" count="775"/>
>     <size from="40961" to="65521" total="166092799" count="3119"/>
>     <size from="65537" to="98289" total="218425380" count="2692"/>
>     <size from="98321" to="131057" total="178383171" count="1555"/>
>     <size from="131089" to="163825" total="167800886" count="1142"/>
>     <size from="163841" to="262065" total="367649915" count="1819"/>
>     <size from="262161" to="522673" total="185347984" count="560"/>
>     <size from="525729" to="30878897" total="113322865" count="97"/>
>     <unsorted from="33" to="33" total="33" count="1"/>
>   </sizes>
> 
> If I sum up the "total=" parts of these large numbers, I get 1.6GB.
> Is this free memory, given back to glibc for future allocations from
> this arena, and if so, are those 1.6GB part of the 4.2GB total?

In trace #2 we have these final statistics:

549 <total type="fast" count="39" size="2656"/>
550 <total type="rest" count="44013" size="1755953515"/>
551 <total type="mmap" count="6" size="121565184"/>
552 <system type="current" size="2246778880"/>
553 <system type="max" size="2246778880"/>
554 <aspace type="total" size="2246778880"/>
555 <aspace type="mprotect" size="2246778880"/>
556 </malloc>

This shows ~1.7GiB of unused free chunks. Keep in mind glibc malloc is a
heap-based allocator, so if you have a FIFO usage pattern you won't see the kernel
heap decrease until you free the most recently allocated chunk. In trace #3 we 
*do* see that application demand consumes all these free chunks again, so
something is using them in the application. There are none left reported in
the malloc_info statistics (could also be chunk corruption).

During trace #2 the only way to free some of the ~1.7GiB in use by the algorithm
is to call malloc_trim() to free back unused pages (requires a free/unsorted chunk
walk and munmap() calls to the kernel to reduce RSS accounting). Calling malloc_trim
is expensive, particularly if you're just going to use the chunks again, as
appears to be happening the next day.

In trace #3, for which we are at 4.2GiB of RSS usage, we see the following:

742 ;; malloc-info
743 (malloc-info)
744 <malloc version="1">
745 <heap nr="0">
746 <sizes>
747 </sizes>
748 <total type="fast" count="0" size="0"/>
749 <total type="rest" count="1" size="112688"/>

a. Arena 0 (kernel heap) shows 0KiB of unused fast bins, 112KiB of other
   in 1 bin (probably top-chunk).

750 <system type="current" size="4243079168"/>
751 <system type="max" size="4243079168"/>
752 <aspace type="total" size="4243079168"/>
753 <aspace type="mprotect" size="4243079168"/>

b. Arena 0 (kernel heap) shows 4.2GiB "current" which means that the
   sbrk-extended kernel heap is in use up to 4.2GiB.
   WARNING: We count "foreign" uses of sbrk as brk space, so looking for
   sbrk or brk by a foreign source is useful.

754 </heap>
755 <heap nr="1">
756 <sizes>
757   <size from="17" to="32" total="32" count="1"/>
758   <size from="33" to="48" total="240" count="5"/>
759   <size from="49" to="64" total="256" count="4"/>
760   <size from="65" to="80" total="160" count="2"/>
761   <size from="97" to="112" total="224" count="2"/>
762   <size from="33" to="33" total="231" count="7"/>
763   <size from="49" to="49" total="294" count="6"/>
764   <size from="65" to="65" total="390" count="6"/>
765   <size from="81" to="81" total="162" count="2"/>
766   <size from="97" to="97" total="97" count="1"/>
767   <size from="129" to="129" total="516" count="4"/>
768   <size from="161" to="161" total="644" count="4"/>
769   <size from="209" to="209" total="1254" count="6"/>
770   <size from="241" to="241" total="241" count="1"/>
771   <size from="257" to="257" total="257" count="1"/>
772   <size from="305" to="305" total="610" count="2"/>
773   <size from="32209" to="32209" total="32209" count="1"/>
774   <size from="3982129" to="8059889" total="28065174" count="6"/>
775   <unsorted from="209" to="4020593" total="4047069" count="13"/>
776 </sizes>
777 <total type="fast" count="14" size="912"/>
778 <total type="rest" count="61" size="42357420"/>
779 <system type="current" size="42426368"/>
780 <system type="max" size="42426368"/>
781 <aspace type="total" size="42426368"/>
782 <aspace type="mprotect" size="42426368"/>
783 <aspace type="subheaps" size="1"/>

c. Arena 1 has 42MiB of free'd chunks for use.

784 </heap>
785 <total type="fast" count="14" size="912"/>
786 <total type="rest" count="62" size="42470108"/>
787 <total type="mmap" count="9" size="208683008"/>

d. We have:
   - 912 bytes of fast bins.
   - 42MiB of regular bins.
   - 200MiB of mmap'd large chunks.

788 <system type="current" size="4285505536"/>
789 <system type="max" size="4285505536"/>
790 <aspace type="total" size="4285505536"/>

e. Total allocated space is 4.2GiB.

791 <aspace type="mprotect" size="4285505536"/>
792 </malloc>

Something is using the kernel heap chunks, or calling sbrk/brk
directly (since foreign brks are counted by our statistics).
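
(An aside, not from the thread: one way to catch the moment the heap
ratchets is to snapshot malloc_info periodically from inside Emacs.
This assumes the `malloc-info' function is available, as the log above
shows it is in this build; it prints the XML to stderr, so Emacs would
need to be started with stderr redirected to a file.)

;; Sketch: every 30 minutes, print a timestamp and then dump glibc's
;; malloc_info XML, both to stderr.
(defun my/malloc-info-snapshot ()
  (princ (format-time-string ";; malloc-info %F %T\n")
         #'external-debugging-output)
  (malloc-info))

(run-at-time t (* 30 60) #'my/malloc-info-snapshot)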

>> This shows the application is USING memory on the main system heap.
>>
>> It might not be "leaked" memory since the application might be using it.
>>
>> You want visibility into what is USING that memory.
>>
>> With glibc-malloc-trace-utils you can try to do that with:
>>
>> LD_PRELOAD=libmtrace.so \
>> MTRACE_CTL_FILE=/home/user/app.mtr \
>> MTRACE_CTL_BACKTRACE=1 \
>> ./app
>>
>> This will use libgcc's unwinder to get a copy of the malloc caller
>> address and then we'll have to decode that based on a /proc/self/maps.
>>
>> Next steps:
>> - Get a glibc-malloc-trace-utils trace of the application ratcheting.
>> - Get a copy of /proc/$PID/maps for the application (shorter version of smaps).
>>
>> Then we might be able to correlate where all the kernel heap data went?
> 
> Thanks for the instructions.  Would people please try that and report
> the results?
> 


-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 18:08                         ` Jean Louis
  2020-11-25 18:51                           ` Trevor Bentley
@ 2020-11-25 19:01                           ` Carlos O'Donell
  1 sibling, 0 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-25 19:01 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen, Trevor Bentley

On 11/25/20 1:08 PM, Jean Louis wrote:
> * Carlos O'Donell <carlos@redhat.com> [2020-11-25 20:45]:
>> With glibc-malloc-trace-utils you can try to do that with:
>>
>> LD_PRELOAD=libmtrace.so \
>> MTRACE_CTL_FILE=/home/user/app.mtr \
>> MTRACE_CTL_BACKTRACE=1 \
>> ./app
>>
>> This will use libgcc's unwinder to get a copy of the malloc caller
>> address and then we'll have to decode that based on a
>> /proc/self/maps.
> 
> I will also try that in the next session.
> 
> One problem I have here is that since I started this session I have
> not had any problem.  My uptime is over 2 days, I have not changed my
> habits of work within Emacs, and my swap remains under 200 MB with
> only 10% of memory used by Emacs, normally 80-90%.
> 
> Almost as a rule, I could not run longer than 1 day before I would
> get swap usage of about 3 GB - 4 GB and an unresponsive Emacs.
> 
> Can it be that libmtrace.so prevents something that normally
> happens?

It could. If there are timing sensitivities to this issue then it might
be sufficiently perturbed that it doesn't reproduce. The above backtracing
is expensive and increases the performance impact. However, given that
we want to know who the caller was and determine the source of the 4.2GiB
allocations... we need to try to capture that information.

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 18:51                           ` Trevor Bentley
@ 2020-11-25 19:02                             ` Carlos O'Donell
  2020-11-25 19:17                               ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-25 19:02 UTC (permalink / raw)
  To: Trevor Bentley, Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen

On 11/25/20 1:51 PM, Trevor Bentley wrote:
> I also still hit it while running under Valgrind; the whole emacs
> session was slow as hell, but still managed to blow out its heap in a
> few days.  Of course, libmtrace could be different, but at least it
> doesn't seem to be a heisenbug.

Do you have a valgrind report to share?

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 17:47                             ` Eli Zaretskii
@ 2020-11-25 19:06                               ` Trevor Bentley
  2020-11-25 19:22                                 ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-25 19:06 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:

> Then I don't think we will be able to understand what consumes 
> memory at such high rate without some debugging.  Have you 
> considered using breakpoints and collecting backtraces, as I 
> suggested earlier? 

Next up will be libmtrace, and then I can look into gdb.  It's 
going to be really noisy... we'll see how it goes.
 
> 
> So the result of GC only tells you how much of the memory 
> was freed but NOT returned to glibc; it doesn't show how much 
> was actually free'd. 
> 
>> I'm wondering how to figure out how much memory a call to 
>> (garbage-collect) has actually freed.  Possibly a sort of "dry 
>> run" where it performs the GC algorithm, but doesn't release 
>> any memory. 
> 
> "Freed" in what sense? returned to glibc? 

I was referring to glibc malloc/free, but emacs internal 
allocations would also be interesting.  It's a moot point, as I 
don't think emacs supports it.  In short, the question is "what 
has garbage-collect done?"  It prints the state of memory after it 
is finished, but I have no idea if it has actually "collected" 
anything.
 
>> As far as I understand, garbage collection is supposed to 
>> happen  automatically during idle.  I would certainly notice if 
>> it locked  up the whole instance for 10 minutes from an idle 
>> GC.  I think  this means the automatic garbage collection is 
>> either not  happening, or running on a different thread, or 
>> being interrupted,  or simply works differently.  I have no 
>> idea, hence asking you :) 
> 
> That is very strange.  There's only one function to perform GC, 
> and it is called both from garbage-collect and from an internal 
> function called when Emacs is idle or when it calls interpreter 
> functions like 'eval' or 'funcall'.  The only thing 
> garbage-collect does that the internal function doesn't is 
> generate the list that is the return value of garbage-collect, 
> but that cannot possibly take minutes. 
> 
> I suggest to set garbage-collection-messages non-nil, then you 
> should see when each GC, whether the one you invoke 
> interactively or the automatic one, starts and ends.  maybe the 
> minutes you wait are not directly related to GC, but to 
> something else that is triggered by GC? 

I just set garbage-collection-messages to non-nil and evaluated 
(garbage-collect), and nothing was printed... you are suggesting 
that it should print something to *Messages*, right?

I've never tried emacs's profiler.  I'll try that next time I do a 
big garbage-collect and see what it shows.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 18:57                           ` Carlos O'Donell
@ 2020-11-25 19:13                             ` Eli Zaretskii
  0 siblings, 0 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-25 19:13 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Cc: trevor@trevorbentley.com, bugs@gnu.support, fweimer@redhat.com,
>  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Wed, 25 Nov 2020 13:57:34 -0500
> 
> There are 3 malloc_info traces in the log.
> 
> 1. Lines 47-219. Day 1: 1100MiB of RSS.
> 2. Lines 386-556. Day 4: 2.3GiB of RSS.
> 3. Lines 744-792. Day 5: 4.2GiB of RSS.
> 
> Lines are numbered for the log starting at 1.
>  
> > To make sure there are no misunderstandings, I'm talking about this
> > part of the log:
> 
> Your analysis is for trace #2, lines 386-556.
> 
> My analysis was for trace #3, lines 744-792.

OK, thanks for clarifying my confusion.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 19:02                             ` Carlos O'Donell
@ 2020-11-25 19:17                               ` Trevor Bentley
  2020-11-25 20:51                                 ` Carlos O'Donell
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-25 19:17 UTC (permalink / raw)
  To: Carlos O'Donell, Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen

Carlos O'Donell <carlos@redhat.com> writes:

> On 11/25/20 1:51 PM, Trevor Bentley wrote: 
>> I also still hit it while running under Valgrind; the whole 
>> emacs session was slow as hell, but still managed to blow out 
>> its heap in a few days.  Of course, libmtrace could be 
>> different, but at least it doesn't seem to be a heisenbug. 
> 
> Do you have a valgrind report to share? 

Yes, they were earlier in this bug report, perhaps before you 
joined.  It was the 'massif' heap tracing tool from the valgrind 
suite, not the regular valgrind leak detector.

Here are the links again:

  The raw massif output: 
 
  http://trevorbentley.com/massif.out.3364630 
 
  The *full* tree output: 
 
  http://trevorbentley.com/ms_print.3364630.txt 
 
  The tree output showing only entries above 10% usage: 
 
  http://trevorbentley.com/ms_print.thresh10.3364630.txt

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 19:06                               ` Trevor Bentley
@ 2020-11-25 19:22                                 ` Eli Zaretskii
  2020-11-25 19:38                                   ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-25 19:22 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de, carlos@redhat.com
> Cc: 
> Date: Wed, 25 Nov 2020 20:06:21 +0100
> 
> > "Freed" in what sense? returned to glibc? 
> 
> I was referring to glibc malloc/free, but emacs internal 
> allocations would also be interesting.  It's a moot point, as I 
> don't think emacs supports it.  In short, the question is "what 
> has garbage-collect done?"  It prints the state of memory after it 
> is finished, but I have no idea if it has actually "collected" 
> anything.

GC always frees something, don't worry about that.  Your chances of
finding Emacs in a state that it has no garbage to free are nil.

> I just set garbage-collection-messages to non-nil and evaluated 
> (garbage-collect), and nothing was printed...

??? really?  That can only happen if memory-full is non-nil.  Is it?

> you are suggesting that it should print something to *Messages*,
> right?

No, in the echo area.  These messages don't go to *Messages*.

> I've never tried emacs's profiler.  I'll try that next time I do a 
> big garbage-collect and see what it shows.

That won't help in this case: GC is in C, and the profiler doesn't
profile C code that is not exposed to Lisp.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 19:22                                 ` Eli Zaretskii
@ 2020-11-25 19:38                                   ` Trevor Bentley
  2020-11-25 20:02                                     ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-25 19:38 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:

>> you are suggesting that it should print something to 
>> *Messages*, right? 
> 
> No, in the echo area.  These messages don't go to *Messages*. 

Oh!  Well, yes, it is there then.  I didn't realize you can echo 
without going to *Messages*.  It's extremely fleeting... is there 
some way to persist these messages?
 
>> I've never tried emacs's profiler.  I'll try that next time I 
>> do a  big garbage-collect and see what it shows. 
> 
> That won't help in this case: GC is in C, and the profiler 
> doesn't profile C code that is not exposed to Lisp. 

Ah, ok.  Well, I'll try it anyway, and expect nothing.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 19:38                                   ` Trevor Bentley
@ 2020-11-25 20:02                                     ` Eli Zaretskii
  2020-11-25 20:43                                       ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-25 20:02 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de, carlos@redhat.com
> Cc: 
> Date: Wed, 25 Nov 2020 20:38:38 +0100
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> you are suggesting that it should print something to 
> >> *Messages*, right? 
> > 
> > No, in the echo area.  These messages don't go to *Messages*. 
> 
> Oh!  Well, yes, it is there then.  I didn't realize you can echo 
> without going to *Messages*.  It's extremely fleeting... is there 
> some way to persist these messages?

But if GC is taking minutes, you should be seeing the first of these 2
messages sitting in the echo area for the full duration of those
minutes.  So how can they be so ephemeral in your case?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 20:02                                     ` Eli Zaretskii
@ 2020-11-25 20:43                                       ` Trevor Bentley
  0 siblings, 0 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-11-25 20:43 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:

>> Oh!  Well, yes, it is there then.  I didn't realize you can 
>> echo  without going to *Messages*.  It's extremely 
>> fleeting... is there  some way to persist these messages? 
> 
> But if GC is taking minutes, you should be seeing the first of 
> these 2 messages sitting in the echo area for the full duration 
> of those minutes.  So how can they be so ephemeral in your case? 

Yes, for the long ones I expect to see the message hang in the 
echo area.  I was just hoping to also see when it is GC'ing in 
general (if it is GC'ing in general, since it's behaving so 
weirdly).  A timestamped log of every time garbage-collect runs 
would be great.  Maybe I can do that with "(add-function :around 
...)".

The long garbage-collect doesn't happen until I'm in exploding 
memory mode.  I recently restarted emacs, so right now a GC is 
instantaneous.  I'll let you know how it goes next time the memory 
runs away.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 19:17                               ` Trevor Bentley
@ 2020-11-25 20:51                                 ` Carlos O'Donell
  2020-11-26 13:58                                   ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-25 20:51 UTC (permalink / raw)
  To: Trevor Bentley, Jean Louis; +Cc: fweimer, 43389, dj, michael_heerdegen

On 11/25/20 2:17 PM, Trevor Bentley wrote:
> Carlos O'Donell <carlos@redhat.com> writes:
> 
>> On 11/25/20 1:51 PM, Trevor Bentley wrote:
>>> I also still hit it while running under Valgrind; the whole emacs
>>> session was slow as hell, but still managed to blow out its heap in
>>> a few days.  Of course, libmtrace could be different, but at least
>>> it doesn't seem to be a heisenbug.
>>
>> Do you have a valgrind report to share? 
> 
> Yes, they were earlier in this bug report, perhaps before you joined.
> It was the 'massif' heap tracing tool from the valgrind suite, not
> the regular valgrind leak detector.
> 
> Here are the links again:
> 
>  The raw massif output:
>  http://trevorbentley.com/massif.out.3364630
>  The *full* tree output:
>  http://trevorbentley.com/ms_print.3364630.txt
>  The tree output showing only entries above 10% usage:
>  http://trevorbentley.com/ms_print.thresh10.3364630.txt

This data is pretty clear:

 1.40GiB - lisp_align_malloc (alloc.c:1195)
 1.40GiB - lmalloc (alloc.c:1359)
 0.65GiB - lrealloc (alloc.c:1374)
 0.24GiB - AcquireAlignedMemory (/usr/lib/libMagickCore-7.Q16HDRI.so.7.0.0)
--------
 3.60GiB - In use as of the snapshot.

That's a fairly high fraction of the ~4.2GiB that is eventually in use.

With lisp_align_malloc, lmalloc, and lrealloc shooting up exponentially at the end of the run, it looks like they are making lists and processing numbers and other objects.

This is a direct expression of something increasing demand for memory.
	
-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 18:03                         ` Eli Zaretskii
  2020-11-25 18:57                           ` Carlos O'Donell
@ 2020-11-26  9:09                           ` Jean Louis
  2020-11-26 14:13                             ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-26  9:09 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, Carlos O'Donell, trevor,
	michael_heerdegen

Hello Eli,

Here is a short report on the behavior:

Emacs uptime: 2 days, 19 hours, 46 minutes, 49 seconds

I think it was 11:12 o'clock in my time zone. I was not doing anything
special, just writing emails and invoking emacsclient. All the time
before, the swap shown by symon-mode was just about 200 MB; suddenly
it grew to a large number, maybe a few gigabytes, and the hard disk
started working heavily. It all became very slow, but I could still
write letters.

I tried to invoke M-x good-bye around 11:12; that is when it all
became very slow and started working the hard disk. Almost everything
on screen was blocked. Emacs was kind of empty, no menus, nothing,
just a blank black background, no mode line. So I moved it to another
workspace and continued working with zile.

About 36 minutes later it finally wrote this information into the file:

((uptime "2 days, 18 hours, 32 minutes, 32 seconds") (pid 13339) (garbage ((conses 16 4438358 789442) (symbols 48 86924 25) (strings 32 571988 149785) (string-bytes 1 25104928) (vectors 16 245282) (vector-slots 8 4652918 1622184) (floats 8 1860 19097) (intervals 56 645336 37479) (buffers 992 900))) (buffers-size 200839861) (vsize (vsize 5144252)))

A few minutes after that, I invoked good-bye again:

((uptime "2 days, 18 hours, 35 minutes, 19 seconds") (pid 13339) (garbage ((conses 16 4511014 617524) (symbols 48 86926 23) (strings 32 576134 114546) (string-bytes 1 25198549) (vectors 16 245670) (vector-slots 8 4636183 1560354) (floats 8 1859 18842) (intervals 56 655325 24178) (buffers 992 900))) (buffers-size 200898858) (vsize (vsize 5144252)))

But what happened after 36 minutes of waiting is that Emacs became
responsive again. So I am still running this session, and I hope to
get the mtrace after the session has finished.

Before, I was never patient for longer than maybe 3-5 minutes, and I
aborted Emacs. But now I can see it stabilized after the hard work
with memory, or whatever it was doing. Swap is 1809 MB and vsize is
just the same as above.

My observation on "what I was doing when vsize started growing" is
simple: I was just editing email, nothing drastic. I did not do
anything special.

If you say I should finish the session now and send the mtrace, I can
do it.

Jean


(defun good-bye ()
  (interactive)
  (let* ((garbage (garbage-collect))
	 (size 0)
	 (buffers-size (dolist (buffer (buffer-list) size)
			(setq size (+ size (buffer-size buffer)))))
	 (uptime (emacs-uptime))
	 (pid (emacs-pid))
	 (vsize (vsize-value))
	 (file (format "~/tmp/emacs-session-%s.el" pid))
	 (list (list (list 'uptime uptime) (list 'pid pid)
		     (list 'garbage garbage) (list 'buffers-size buffers-size)
		     (list 'vsize vsize))))
    (with-temp-file file
      (insert (prin1-to-string list)))
    (message file)))
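
(The `vsize-value' helper used above is not shown in the thread; a
hypothetical Linux-only stand-in, returning the same (vsize N) shape
seen in the session logs, could read VmSize from /proc:)

;; Hypothetical stand-in for `vsize-value', not from the thread.
;; Reads VmSize (in kB) for the current Emacs process from
;; /proc/PID/status on Linux.
(defun vsize-value ()
  (with-temp-buffer
    (insert-file-contents (format "/proc/%d/status" (emacs-pid)))
    (when (re-search-forward "^VmSize:[ \t]*\\([0-9]+\\)" nil t)
      (list 'vsize (string-to-number (match-string 1))))))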







^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 17:45                       ` Carlos O'Donell
  2020-11-25 18:03                         ` Eli Zaretskii
  2020-11-25 18:08                         ` Jean Louis
@ 2020-11-26 12:37                         ` Trevor Bentley
  2020-11-26 14:30                           ` Eli Zaretskii
  2 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-26 12:37 UTC (permalink / raw)
  To: Carlos O'Donell, Eli Zaretskii
  Cc: fweimer, 43389, bugs, dj, michael_heerdegen

> You want visibility into what is USING that memory. 
> 
> With glibc-malloc-trace-utils you can try to do that with: 
> 
> LD_PRELOAD=libmtrace.so \
> MTRACE_CTL_FILE=/home/user/app.mtr \
> MTRACE_CTL_BACKTRACE=1 \
> ./app 
> 
> This will use libgcc's unwinder to get a copy of the malloc 
> caller address and then we'll have to decode that based on a 
> /proc/self/maps. 
> 
> Next steps: 
> - Get a glibc-malloc-trace-utils trace of the application ratcheting. 
> - Get a copy of /proc/$PID/maps for the application (shorter version of smaps). 
> 

Oh, this is going to be a problem.  I guess it is producing one 
trace file per thread?

I ran it with libmtrace overnight.  Memory usage was very high, 
but it doesn't look like the same problem.  I hit 1550MB of RSS, 
but smaps reported only ~350MB of that was in the heap, which 
seemed reasonable for the ~150MB that emacs reported it was using. 
Does libmtrace add a lot of memory overhead?

However, libmtrace has made 4968 files totalling 26GB in that 
time.  Ouch.

It's going to be hard to tell when I hit the bug under libmtrace, 
questionable whether the report will even fit on my disk, and 
tricky to share however many tens of gigabytes of trace files it 
results in.

If it's one trace per thread, though, then we at least know that 
my emacs process in question is blazing through threads.  That 
could be relevant.

Another thing to note (for Eli): I wrapped garbage-collect like so:

---
(defun trev/garbage-collect (orig-fun &rest args)
  (message "%s -- Starting garbage-collect." (current-time-string))
  (let ((time (current-time))
        (result (apply orig-fun args)))
    (message "%s -- Finished garbage-collect in %.06f"
             (current-time-string) (float-time (time-since time)))
    result))
(add-function :around (symbol-function 'garbage-collect)
              #'trev/garbage-collect)
---

This printed a start and stop message each time I evaluated 
garbage-collect manually.  It did not print any messages in 11 
hours of running unattended.  This is with an active network 
connection receiving messages fairly frequently, so there was 
plenty of consing going on.  Hard for me to judge if it should run 
any garbage collection in that time, but I would have expected so.
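
(Not something suggested in the thread, but a low-effort check: the
built-in GC counters advance on automatic GCs too, so polling them a
few hours apart shows whether GC ran at all.)

;; Sketch: `gcs-done' counts completed GCs and `gc-elapsed' their
;; total time; compare two readings to see if automatic GC is running.
(list gcs-done gc-elapsed gc-cons-threshold gc-cons-percentage)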

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-25 20:51                                 ` Carlos O'Donell
@ 2020-11-26 13:58                                   ` Eli Zaretskii
  2020-11-26 20:21                                     ` Carlos O'Donell
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-26 13:58 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Cc: Eli Zaretskii <eliz@gnu.org>, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Wed, 25 Nov 2020 15:51:16 -0500
> 
> >  The raw massif output:
> >  http://trevorbentley.com/massif.out.3364630
> >  The *full* tree output:
> >  http://trevorbentley.com/ms_print.3364630.txt
> >  The tree output showing only entries above 10% usage:
> >  http://trevorbentley.com/ms_print.thresh10.3364630.txt
> 
> This data is pretty clear:
> 
>  1.40GiB - lisp_align_malloc (alloc.c:1195)
>  1.40GiB - lmalloc (alloc.c:1359)
>  0.65GiB - lrealloc (alloc.c:1374)
>  0.24GiB - AcquireAlignedMemory (/usr/lib/libMagickCore-7.Q16HDRI.so.7.0.0)
> --------
>  3.60GiB - In use as of the snapshot.
> 
> That's a fairly high fraction of the ~4.2GiB that is eventually in use.
> 
> With lisp_align_malloc, lmalloc, and lrealloc shooting up
> exponentially at the end of the run, it looks like they are making
> lists and processing numbers and other objects.
> 
> This is a direct expression of something increasing demand for memory.

So, at least in Trevor's case, it sounds like we sometimes request a
lot of memory during short periods of time.  But what kind of memory
is that?

lmalloc is called by xmalloc, xrealloc, xzalloc, and xpalloc --
functions Emacs calls to get memory unrelated to Lisp data.  But it is
also called by lisp_malloc, which is used to allocate memory for some
Lisp objects.  lisp_align_malloc, OTOH, is used exclusively for
allocating Lisp data (conses, strings, etc.).

It is somewhat strange that lisp_align_malloc and lmalloc were called
to allocate similar amounts of memory: these two functions are
orthogonal, AFAICS, used for disparate groups of Lisp object types,
and it sounds strange that we somehow allocate very similar amounts of
memory for those data types.

Another observation is that since GC succeeds in releasing a large
portion of this memory, it would probably mean some significant
proportion of the calls are for Lisp data, maybe strings (because GC
compacts strings, which can allow Emacs to release more memory to
glibc's heap allocation machinery).

Apart from that, I think we really need to see the most significant
customers of these functions when the memory footprint starts growing
fast.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26  9:09                           ` Jean Louis
@ 2020-11-26 14:13                             ` Eli Zaretskii
  2020-11-26 18:37                               ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-26 14:13 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

> Date: Thu, 26 Nov 2020 12:09:32 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: Carlos O'Donell <carlos@redhat.com>, trevor@trevorbentley.com,
>   fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
>   michael_heerdegen@web.de
> 
> ((uptime "2 days, 18 hours, 35 minutes, 19 seconds") (pid 13339) (garbage ((conses 16 4511014 617524) (symbols 48 86926 23) (strings 32 576134 114546) (string-bytes 1 25198549) (vectors 16 245670) (vector-slots 8 4636183 1560354) (floats 8 1859 18842) (intervals 56 655325 24178) (buffers 992 900))) (buffers-size 200898858) (vsize (vsize 5144252)))
> 
> But what happened after 36 minutes of waiting is that Emacs became
> responsive again. So I am still running this session, and I hope to
> get the mtrace after the session has finished.
> 
> Before, I was never patient for longer than maybe 3-5 minutes, and I
> aborted Emacs. But now I can see it stabilized after the hard work
> with memory, or whatever it was doing. Swap is 1809 MB and vsize is
> just the same as above.

It's still 5GB, which is a fairly large footprint, certainly for a
2-day session.

> My observation on "what I was doing when vsize started growing" is
> simple: I was just editing email, nothing drastic. I did not do
> anything special.

Can you describe in more detail how you edit email?  Which email
package(s) do you use, and what would composing email generally
involve?

Also, are there any background activities that routinely run in your
Emacs sessions?

> If you say I should finish the session now and send the mtrace, I can
> do it.

That's for Carlos to say.

Thanks for the info.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 12:37                         ` Trevor Bentley
@ 2020-11-26 14:30                           ` Eli Zaretskii
  2020-11-26 15:19                             ` Trevor Bentley
  2020-11-26 18:25                             ` Jean Louis
  0 siblings, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-26 14:30 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de
> Cc: 
> Date: Thu, 26 Nov 2020 13:37:54 +0100
> 
> If it's one trace per thread, though, then we at least know that 
> my emacs process in question is blazing through threads.

I don't see how this could be true, unless some library you use
(ImageMagick?) starts a lot of threads.  Emacs itself is
single-threaded, and the only other threads are those from GTK, which
should be very few (like, 4 or 5).  This assumes you didn't use Lisp
threads, of course.

> Other thing to note (for Eli): I wrapped garbage-collect like so:
> 
> ---
> (defun trev/garbage-collect (orig-fun &rest args)
>   (message "%s -- Starting garbage-collect." (current-time-string))
>   (let ((time (current-time))
>         (result (apply orig-fun args)))
>     (message "%s -- Finished garbage-collect in %.06f"
>              (current-time-string) (float-time (time-since time)))
>     result))
> (add-function :around (symbol-function 'garbage-collect)
>               #'trev/garbage-collect)
> ---
> 
> This printed a start and stop message each time I evaluated 
> garbage-collect manually.  It did not print any messages in 11 
> hours of running unattended.

That's expected, because the automatic GC doesn't call
garbage-collect.  garbage-collect is just a thin wrapper around a C
function, called garbage_collect, and the automatic GC calls that
function directly from C.  And you cannot advise C functions not
exposed to Lisp.

If you want to have record of the times it took each GC to run, you
will have to modify the C sources.
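
(An aside not raised in the thread: `post-gc-hook' does run from C
after every garbage collection, so a sketch like the one below can at
least timestamp GC occurrences from Lisp; measuring GC durations still
requires modifying the C sources, as noted above.)

;; Sketch: log each completed GC with a timestamp.  `post-gc-hook'
;; and `gcs-done' are standard Emacs facilities; this records when
;; GCs happen, not how long they take.
(add-hook 'post-gc-hook
          (lambda ()
            (message "%s -- GC #%d finished"
                     (current-time-string) gcs-done)))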





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 14:30                           ` Eli Zaretskii
@ 2020-11-26 15:19                             ` Trevor Bentley
  2020-11-26 15:31                               ` Eli Zaretskii
  2020-11-27  4:54                               ` Carlos O'Donell
  2020-11-26 18:25                             ` Jean Louis
  1 sibling, 2 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-11-26 15:19 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

>> If it's one trace per thread, though, then we at least know 
>> that  my emacs process in question is blazing through threads. 
> 
> I don't see how this could be true, unless some library you use 
> (ImageMagick?) starts a lot of threads.  Emacs itself is 
> single-threaded, and the only other threads are those from GTK, 
> which should be very few (like, 4 or 5).  This assumes you 
> didn't use Lisp threads, of course. 

Oh, it may be subprocesses instead of threads.  emacs-slack is 
doing all sorts of things, involving both ImageMagick and 
launching curl subprocesses.  Is there a way to prevent libmtrace 
from following children?

I've just hooked make-process and make-thread, and see both being 
called back-to-back very often for spawning curl subprocesses.
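
(For reference, a minimal sketch of such a hook; the function name is
illustrative, not from the thread.)

;; Sketch: log every process spawned from Lisp.  Note that advice on
;; a primitive only fires for calls made from Lisp, not from C.
(defun my/log-make-process (orig-fun &rest args)
  (message "%s -- make-process: %S"
           (current-time-string) (plist-get args :name))
  (apply orig-fun args))

(advice-add 'make-process :around #'my/log-make-process)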
 
>> This printed a start and stop message each time I evaluated 
>> garbage-collect manually.  It did not print any messages in 11 
>> hours of running unattended. 
> 
> That's expected, because the automatic GC doesn't call 
> garbage-collect.  garbage-collect is just a thin wrapper around 
> a C function, called garbage_collect, and the automatic GC calls 
> that function directly from C.  And you cannot advise C 
> functions not exposed to Lisp. 
> 
> If you want to have record of the times it took each GC to run, 
> you will have to modify the C sources. 

Gotcha.  No surprise, then.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 15:19                             ` Trevor Bentley
@ 2020-11-26 15:31                               ` Eli Zaretskii
  2020-11-27  4:54                               ` Carlos O'Donell
  1 sibling, 0 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-26 15:31 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: carlos@redhat.com, bugs@gnu.support, fweimer@redhat.com,
>  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
> Cc: 
> Date: Thu, 26 Nov 2020 16:19:53 +0100
> 
> I've just hooked make-process and make-thread, and see both being 
> called back-to-back very often for spawning curl subprocesses.

What Lisp commands cause make-thread to be called?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 14:30                           ` Eli Zaretskii
  2020-11-26 15:19                             ` Trevor Bentley
@ 2020-11-26 18:25                             ` Jean Louis
  2020-11-27  4:55                               ` Carlos O'Donell
  1 sibling, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-26 18:25 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, carlos, Trevor Bentley, michael_heerdegen

My mtrace files do not have the PID of Emacs. Maybe it got lost
because I killed Emacs. There are many other PID files. Or maybe the
initial PID file was created by the script that ran it.

Should I provide the mtrace files which do not have the Emacs PID?






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 14:13                             ` Eli Zaretskii
@ 2020-11-26 18:37                               ` Jean Louis
  2020-11-27  5:08                                 ` Carlos O'Donell
  0 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-11-26 18:37 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

* Eli Zaretskii <eliz@gnu.org> [2020-11-26 17:14]:
> > Date: Thu, 26 Nov 2020 12:09:32 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: Carlos O'Donell <carlos@redhat.com>, trevor@trevorbentley.com,
> >   fweimer@redhat.com, 43389@debbugs.gnu.org, dj@redhat.com,
> >   michael_heerdegen@web.de
> > 
> > ((uptime "2 days, 18 hours, 35 minutes, 19 seconds") (pid 13339) (garbage ((conses 16 4511014 617524) (symbols 48 86926 23) (strings 32 576134 114546) (string-bytes 1 25198549) (vectors 16 245670) (vector-slots 8 4636183 1560354) (floats 8 1859 18842) (intervals 56 655325 24178) (buffers 992 900))) (buffers-size 200898858) (vsize (vsize 5144252)))
> > 
> > But what happened after 36 minutes of waiting is that Emacs became
> > responsive again. So I am still running this session, and I hope to
> > get the mtrace after the session has finished.
> > 
> > Before, I was never patient for longer than maybe 3-5 minutes, and I
> > aborted Emacs. But now I can see it stabilized after the hard work
> > with memory, or whatever it was doing. Swap is 1809 MB and vsize is
> > just the same as above.
> 
> It's still 5GB, which is a fairly large footprint, certainly for a
> 2-day session.

And this time I could observe that it happened quickly: from some
200 MB of reported swap it grew to a few gigabytes in a few minutes.

> > My observation on "what I was doing when vsize started growing" is
> > simple: I was just editing email, nothing drastic. I did not do
> > anything special.
> 
> Can you describe in more detail how you edit email?  Which email
> package(s) do you use, and what would composing email generally
> involve?

I was using an XTerm, invoked from outside, with mutt. Mutt invokes
emacsclient, which normally uses the same frame, but sometimes another
frame. The default setting is to use a new frame, but I sometimes
change it to invoke emacsclient without creating a new frame.

There are 2 modules that I load: vterm, and emacs-libpq for the database.

> Also, are there any background activities that routinely run in your
> Emacs sessions?

Jabber doing XMPP (without problems before), persistent-scratch,
symon-mode, helm, sql-postgres mode; eshell and shell are always
running.

Timers now:
               5.0s            - undo-auto--boundary-timer
              10.1s        30.0s jabber-whitespace-ping-do
              18.8s      1m 0.0s display-time-event-handler
           4m 49.4s      5m 0.0s persistent-scratch-save
          31m 10.9s   1h 0m 0.0s url-cookie-write-file
   *           0.1s            t show-paren-function
   *           0.5s      :repeat blink-cursor-start
   *           0.5s            t #f(compiled-function () #<bytecode 0x23a02dfeda0a1d> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
   *           1.0s            - helm-ff--cache-mode-refresh
   *           2.0s            t jabber-activity-clean

> > If you say I should finish the session now and send the mtrace, I can
> > do it.
> 
> That's for Carlos to say.
> 
> Thanks for the info.

After some time that session provoked much heavier hard disk swapping,
and I killed Emacs. But I could not find an mtrace file with the
corresponding PID for that Emacs session.

For this session I can see the corresponding PID on the disk. I am now
8 hours into the session. Once it finishes, I hope the mtrace file will
not be deleted even if I kill Emacs.

((uptime "8 hours, 8 minutes, 11 seconds") (pid 7385) (garbage ((conses 16 1032190 170175) (symbols 48 49048 11) (strings 32 252789 45307) (string-bytes 1 8153413) (vectors 16 84232) (vector-slots 8 1713735 81778) (floats 8 690 1822) (intervals 56 68015 4240) (buffers 984 105))) (buffers-size 3632683) (vsize (vsize 1217088)))





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 13:58                                   ` Eli Zaretskii
@ 2020-11-26 20:21                                     ` Carlos O'Donell
  2020-11-26 20:30                                       ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-26 20:21 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

On 11/26/20 8:58 AM, Eli Zaretskii wrote:
> Apart from that, I think we really need to see the most significant
> customers of these functions when the memory footprint starts growing
> fast.
 
It's in the massif captured data.

Of the 1.7GiB it's all in Fcons:

448.2 MiB: Fmake_list
270.3 MiB: in 262 places all over the place (below massif's threshold)
704.0 MiB: list4 -> exec_byte_code
109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
102.2 MiB: Flist -> exec_byte_code ...
 68.5 MiB: Fcopy_alist -> Fframe_parameters ...

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 20:21                                     ` Carlos O'Donell
@ 2020-11-26 20:30                                       ` Eli Zaretskii
  2020-11-27  5:04                                         ` Carlos O'Donell
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-26 20:30 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Cc: trevor@trevorbentley.com, bugs@gnu.support, fweimer@redhat.com,
>  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Thu, 26 Nov 2020 15:21:04 -0500
> 
> On 11/26/20 8:58 AM, Eli Zaretskii wrote:
> > Apart from that, I think we really need to see the most significant
> > customers of these functions when the memory footprint starts growing
> > fast.
>  
> It's in the massif captured data.
> 
> Of the 1.7GiB it's all in Fcons:
> 
> 448.2 MiB: Fmake_list
> 270.3 MiB: in 262 places all over the place (below massif's threshold)
> 704.0 MiB: list4 -> exec_byte_code
> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
> 102.2 MiB: Flist -> exec_byte_code ...
>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...

Thanks.  Those are the low-level primitives, they tell nothing about
the Lisp code which caused this much memory allocation.  We need
higher levels of callstack, and preferably in Lisp terms.  GDB
backtraces would show them, due to tailoring in src/.gdbinit.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 15:19                             ` Trevor Bentley
  2020-11-26 15:31                               ` Eli Zaretskii
@ 2020-11-27  4:54                               ` Carlos O'Donell
  2020-11-27  8:44                                 ` Jean Louis
  1 sibling, 1 reply; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-27  4:54 UTC (permalink / raw)
  To: Trevor Bentley, Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen

On 11/26/20 10:19 AM, Trevor Bentley wrote:
>>> If it's one trace per thread, though, then we at least know that
>>> my emacs process in question is blazing through threads.
>> 
>> I don't see how this could be true, unless some library you use
>> (ImageMagick?) starts a lot of threads.  Emacs itself is
>> single-threaded, and the only other threads are those from GTK,
>> which should be very few (like, 4 or 5).  This assumes you didn't
>> use Lisp threads, of course.
> 
> Oh, it may be subprocesses instead of threads.  emacs-slack is doing
> all sorts of things, involving both ImageMagick and launching curl
> subprocesses.  Is there a way to prevent libmtrace from following
> children?

Each process generates a trace, and that trace contains the data for
all threads in the process.

I've just pushed MTRACE_CTL_CHILDREN; set that to 0 and the children
will not be traced. Thanks for the feedback and enhancement.

commit 8a88a4840b5a573c50264f04f68f71d0496913d3
Author: Carlos O'Donell <carlos@redhat.com>
Date:   Thu Nov 26 23:50:57 2020 -0500

    mtrace: Add support for MTRACE_CTL_CHILDREN.
    
    Allow the tracer to only trace the parent process and disable
    tracing in all child processes unless those processes choose
    to programmatically re-enable tracing via the exposed API.

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 18:25                             ` Jean Louis
@ 2020-11-27  4:55                               ` Carlos O'Donell
  0 siblings, 0 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-27  4:55 UTC (permalink / raw)
  To: Jean Louis, Eli Zaretskii
  Cc: fweimer, 43389, Trevor Bentley, dj, michael_heerdegen

On 11/26/20 1:25 PM, Jean Louis wrote:
> My mtrace files do not have the PID of Emacs. Maybe it got lost
> because I killed Emacs. There are many other PID files. Or maybe the
> initial PID file was created by the script that ran it.
> 
> Should I provide the mtrace files which do not have the Emacs PID?
 
Each PID is from a spawned subprocess.

I've just pushed new code to the tracer that allows you to set
MTRACE_CTL_CHILDREN=0 to avoid tracing the spawned child
processes.

We would only want the mtrace file for the emacs PID (all
contained threads store to that file).

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 20:30                                       ` Eli Zaretskii
@ 2020-11-27  5:04                                         ` Carlos O'Donell
  2020-11-27  7:40                                           ` Eli Zaretskii
  2020-11-27 15:33                                           ` Eli Zaretskii
  0 siblings, 2 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-27  5:04 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

On 11/26/20 3:30 PM, Eli Zaretskii wrote:
>> Cc: trevor@trevorbentley.com, bugs@gnu.support, fweimer@redhat.com,
>>  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
>> From: Carlos O'Donell <carlos@redhat.com>
>> Date: Thu, 26 Nov 2020 15:21:04 -0500
>>
>> On 11/26/20 8:58 AM, Eli Zaretskii wrote:
>>> Apart from that, I think we really need to see the most significant
>>> customers of these functions when the memory footprint starts growing
>>> fast.
>>  
>> It's in the massif captured data.
>>
>> Of the 1.7GiB it's all in Fcons:
>>
>> 448.2 MiB: Fmake_list
>> 270.3 MiB: in 262 places all over the place (below massif's threshold)
>> 704.0 MiB: list4 -> exec_byte_code
>> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
>> 102.2 MiB: Flist -> exec_byte_code ...
>>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...
> 
> Thanks.  Those are the low-level primitives, they tell nothing about
> the Lisp code which caused this much memory allocation.  We need
> higher levels of callstack, and preferably in Lisp terms.  GDB
> backtraces would show them, due to tailoring in src/.gdbinit.

Sure, let me pick one for you:

lisp_align_malloc (alloc.c:1195)
 Fcons (alloc.c:2694)
  concat (fns.c:730)
   Fcopy_sequence (fns.c:598)
    timer_check (keyboard.c:4395)
     wait_reading_process_output (process.c:5334)
      sit_for (dispnew.c:6056)
       read_char (keyboard.c:2742)
        read_key_sequence (keyboard.c:9551)
         command_loop_1 (keyboard.c:1354)
          internal_condition_case (eval.c:1365)
           command_loop_2 (keyboard.c:1095)
            internal_catch (eval.c:1126)
             command_loop (keyboard.c:1074)
              recursive_edit_1 (keyboard.c:718)
               Frecursive_edit (keyboard.c:790)
                main (emacs.c:2080)
 
There is 171MiB's worth of allocations in that path.

There are a lot of traces ending in wait_reading_process_output that
are consuming 50MiB.

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-26 18:37                               ` Jean Louis
@ 2020-11-27  5:08                                 ` Carlos O'Donell
  0 siblings, 0 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-11-27  5:08 UTC (permalink / raw)
  To: Jean Louis, Eli Zaretskii; +Cc: fweimer, 43389, trevor, dj, michael_heerdegen

On 11/26/20 1:37 PM, Jean Louis wrote:
> For this session I can see the corresponding PID on the disk. I am now
> 8 hours into the session. Once it finishes, I hope the mtrace file will
> not be deleted even if I kill Emacs.

Nothing should be deleting the on-disk traces.

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27  5:04                                         ` Carlos O'Donell
@ 2020-11-27  7:40                                           ` Eli Zaretskii
  2020-11-27  7:52                                             ` Eli Zaretskii
  2020-11-27 15:33                                           ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-27  7:40 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Cc: trevor@trevorbentley.com, bugs@gnu.support, fweimer@redhat.com,
>  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Fri, 27 Nov 2020 00:04:56 -0500
> 
> >> 448.2 MiB: Fmake_list
> >> 270.3 MiB: in 262 places all over the place (below massif's threshold)
> >> 704.0 MiB: list4 -> exec_byte_code
> >> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
> >> 102.2 MiB: Flist -> exec_byte_code ...
> >>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...
> > 
> > Thanks.  Those are the low-level primitives, they tell nothing about
> > the Lisp code which caused this much memory allocation.  We need
> > higher levels of callstack, and preferably in Lisp terms.  GDB
> > backtraces would show them, due to tailoring in src/.gdbinit.
> 
> Sure, let me pick one for you:
> 
> lisp_align_malloc (alloc.c:1195)
>  Fcons (alloc.c:2694)
>   concat (fns.c:730)
>    Fcopy_sequence (fns.c:598)
>     timer_check (keyboard.c:4395)
>      wait_reading_process_output (process.c:5334)
>       sit_for (dispnew.c:6056)
>        read_char (keyboard.c:2742)
>         read_key_sequence (keyboard.c:9551)
>          command_loop_1 (keyboard.c:1354)
>           internal_condition_case (eval.c:1365)
>            command_loop_2 (keyboard.c:1095)
>             internal_catch (eval.c:1126)
>              command_loop (keyboard.c:1074)
>               recursive_edit_1 (keyboard.c:718)
>                Frecursive_edit (keyboard.c:790)
>                 main (emacs.c:2080)
>  
> There is 171MiB worth of allocations in that path.
> 
> There are a lot of traces ending in wait_reading_process_output that
> are consuming 50MiB.

Thanks.  If they are like the one above, the allocations are due to
some timer.  Could be jabber, I'll take a look at it.  Or maybe
helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
well.

However, GDB's backtraces are even more informative, as they show Lisp
functions invoked in-between (via exec_byte_code, funcall_subr, etc.).
These pinpoint the offending Lisp code much more accurately.  The
downside is that running with GDB stopping Emacs and emitting the
backtrace is no fun...





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27  7:40                                           ` Eli Zaretskii
@ 2020-11-27  7:52                                             ` Eli Zaretskii
  2020-11-27  8:20                                               ` Eli Zaretskii
  2020-11-28 17:31                                               ` Trevor Bentley
  0 siblings, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-27  7:52 UTC (permalink / raw)
  To: trevor; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> Date: Fri, 27 Nov 2020 09:40:53 +0200
> From: Eli Zaretskii <eliz@gnu.org>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com,
>  michael_heerdegen@web.de, trevor@trevorbentley.com
> 
> > Cc: trevor@trevorbentley.com, bugs@gnu.support, fweimer@redhat.com,
> >  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
> > From: Carlos O'Donell <carlos@redhat.com>
> > Date: Fri, 27 Nov 2020 00:04:56 -0500
> > 
> > >> 448.2 MiB: Fmake_list
> > >> 270.3 MiB: in 262 places all over the place (below massif's threshold)
> > >> 704.0 MiB: list4 -> exec_byte_code
> > >> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
> > >> 102.2 MiB: Flist -> exec_byte_code ...
> > >>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...
> > > 
> > > Thanks.  Those are the low-level primitives, they tell nothing about
> > > the Lisp code which caused this much memory allocation.  We need
> > > higher levels of callstack, and preferably in Lisp terms.  GDB
> > > backtraces would show them, due to tailoring in src/.gdbinit.
> > 
> > Sure, let me pick one for you:
> > 
> > lisp_align_malloc (alloc.c:1195)
> >  Fcons (alloc.c:2694)
> >   concat (fns.c:730)
> >    Fcopy_sequence (fns.c:598)
> >     timer_check (keyboard.c:4395)
> >      wait_reading_process_output (process.c:5334)
> >       sit_for (dispnew.c:6056)
> >        read_char (keyboard.c:2742)
> >         read_key_sequence (keyboard.c:9551)
> >          command_loop_1 (keyboard.c:1354)
> >           internal_condition_case (eval.c:1365)
> >            command_loop_2 (keyboard.c:1095)
> >             internal_catch (eval.c:1126)
> >              command_loop (keyboard.c:1074)
> >               recursive_edit_1 (keyboard.c:718)
> >                Frecursive_edit (keyboard.c:790)
> >                 main (emacs.c:2080)
> >  
> > There is 171MiB worth of allocations in that path.
> > 
> > There are a lot of traces ending in wait_reading_process_output that
> > are consuming 50MiB.
> 
> Thanks.  If they are like the one above, the allocations are due to
> some timer.  Could be jabber, I'll take a look at it.  Or maybe
> helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
> well.

Oops, I got this mixed up: the timer list is from Jean, but the massif
files are from Trevor.

Trevor, can you show the list of timers running on your system?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27  7:52                                             ` Eli Zaretskii
@ 2020-11-27  8:20                                               ` Eli Zaretskii
  2020-11-28  9:00                                                 ` Eli Zaretskii
  2020-11-28 17:31                                               ` Trevor Bentley
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-27  8:20 UTC (permalink / raw)
  To: carlos; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Date: Fri, 27 Nov 2020 09:52:00 +0200
> From: Eli Zaretskii <eliz@gnu.org>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com,
>  carlos@redhat.com, michael_heerdegen@web.de
> 
> > Date: Fri, 27 Nov 2020 09:40:53 +0200
> > From: Eli Zaretskii <eliz@gnu.org>
> > Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com,
> >  michael_heerdegen@web.de, trevor@trevorbentley.com
> > 
> > > lisp_align_malloc (alloc.c:1195)
> > >  Fcons (alloc.c:2694)
> > >   concat (fns.c:730)
> > >    Fcopy_sequence (fns.c:598)
> > >     timer_check (keyboard.c:4395)
> > >      wait_reading_process_output (process.c:5334)
> > >       sit_for (dispnew.c:6056)
> > >        read_char (keyboard.c:2742)
> > >         read_key_sequence (keyboard.c:9551)
> > >          command_loop_1 (keyboard.c:1354)
> > >           internal_condition_case (eval.c:1365)
> > >            command_loop_2 (keyboard.c:1095)
> > >             internal_catch (eval.c:1126)
> > >              command_loop (keyboard.c:1074)
> > >               recursive_edit_1 (keyboard.c:718)
> > >                Frecursive_edit (keyboard.c:790)
> > >                 main (emacs.c:2080)
> > >  
> > > There is 171MiB worth of allocations in that path.
> > > 
> > > There are a lot of traces ending in wait_reading_process_output that
> > > are consuming 50MiB.
> > 
> > Thanks.  If they are like the one above, the allocations are due to
> > some timer.  Could be jabber, I'll take a look at it.  Or maybe
> > helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
> > well.
> 
> Oops, I got this mixed up: the timer list is from Jean, but the massif
> files are from Trevor.

Double oops: the above just shows that each time we process timers, we
copy the list of the timers first.  Not sure what to do about that.
Hmm...  Maybe we should try GC at the end of each timer_check call?

Is it possible to tell how much time it took to allocate those
171MB via the above chain of calls?  I'm trying to assess the rate of
allocations we request this way.
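
One rough way to estimate that rate from the Lisp side is to sample
the cumulative cons counter periodically; a minimal sketch (the my/
names are hypothetical, and memory-use-counts reports counts
accumulated since startup):

    (defvar my/last-conses (car (memory-use-counts)))  ; hypothetical helper
    (defun my/report-cons-rate ()
      "Message how many conses were allocated in the last 10 seconds."
      (let ((now (car (memory-use-counts))))   ; cumulative conses so far
        (message "conses allocated in last 10s: %d" (- now my/last-conses))
        (setq my/last-conses now)))
    (run-with-timer 10 10 #'my/report-cons-rate)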

Each call to lisp_align_malloc above requests a 1008-byte chunk of
memory for a new block of Lisp conses.  Would it benefit us to tune
this value to a larger or smaller size, as far as glibc's malloc is
concerned?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27  4:54                               ` Carlos O'Donell
@ 2020-11-27  8:44                                 ` Jean Louis
  0 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-27  8:44 UTC (permalink / raw)
  To: Carlos O'Donell
  Cc: fweimer, 43389, bugs, dj, michael_heerdegen, Trevor Bentley

* Carlos O'Donell <carlos@redhat.com> [2020-11-27 07:54]:
> Each process generates a trace, and that trace contains the data for
> all threads in the process.
> 
> I've just pushed MTRACE_CTL_CHILDREN, set that to 0 and the children
> will not trace. Thanks for the feedback and enhancement.

Thank you, that is a nice feature; I will use it for the next session.

I have finished one trace and am now packing it, to see whether it can be compressed and uploaded.

I will upload it and share the hyperlink with Carlos and Eli by private email.

Sadly, I could not invoke my function M-x good-bye, and this time I
also did not see the problem with swapping. The problem came when I
invoked M-x eww and was browsing: it blocked. I had to interrupt. In
the end nothing worked and the user interface became unresponsive. I
could not type a key, use the mouse, or do anything. The hard disk was
working, though not much, and the LED was not turned on continually as
usual.

I had been doing my usual work, nothing special, just using eww. The
mouse and menu did not work. M-x did not work. Interrupting with ESC
many times or C-g did not work. It worked once, giving an error in a
process filter, but after that everything was blocked.

My vsize function had been showing a vsize value of over 4 GB in the
minibuffer. Swap size was under 200 MB this time.
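
(My function is not shown here; a minimal Linux-only sketch of such a
vsize helper, under a hypothetical name, reading /proc:)

    (defun my/vsize ()                       ; hypothetical name
      "Show this Emacs's virtual memory size from /proc (Linux only)."
      (interactive)
      (with-temp-buffer
        (insert-file-contents (format "/proc/%d/status" (emacs-pid)))
        (when (re-search-forward "^VmSize:.*" nil t)
          (message "%s" (match-string 0)))))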

When the condition we are trying to capture occurs, my swap size was
always 2-3 GB minimum, and I have 4 GB RAM.

I had to invoke xkill to kill Emacs. The hyperlink to the mtrace is
coming as soon as it is packed, hopefully with better compression.

Thank you,
Jean





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27  5:04                                         ` Carlos O'Donell
  2020-11-27  7:40                                           ` Eli Zaretskii
@ 2020-11-27 15:33                                           ` Eli Zaretskii
  2020-12-08 22:15                                             ` Carlos O'Donell
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-27 15:33 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Cc: trevor@trevorbentley.com, bugs@gnu.support, fweimer@redhat.com,
>  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Fri, 27 Nov 2020 00:04:56 -0500
> 
> lisp_align_malloc (alloc.c:1195)
>  Fcons (alloc.c:2694)
>   concat (fns.c:730)
>    Fcopy_sequence (fns.c:598)
>     timer_check (keyboard.c:4395)
>      wait_reading_process_output (process.c:5334)
>       sit_for (dispnew.c:6056)
>        read_char (keyboard.c:2742)
>         read_key_sequence (keyboard.c:9551)
>          command_loop_1 (keyboard.c:1354)
>           internal_condition_case (eval.c:1365)
>            command_loop_2 (keyboard.c:1095)
>             internal_catch (eval.c:1126)
>              command_loop (keyboard.c:1074)
>               recursive_edit_1 (keyboard.c:718)
>                Frecursive_edit (keyboard.c:790)
>                 main (emacs.c:2080)
>  
> There is 171MiB worth of allocations in that path.

Are there chains of calls that are responsible for more than 171MB
of allocated memory?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27  8:20                                               ` Eli Zaretskii
@ 2020-11-28  9:00                                                 ` Eli Zaretskii
  2020-11-28 10:45                                                   ` Jean Louis
                                                                     ` (2 more replies)
  0 siblings, 3 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-28  9:00 UTC (permalink / raw)
  To: carlos; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

> Date: Fri, 27 Nov 2020 10:20:46 +0200
> From: Eli Zaretskii <eliz@gnu.org>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com,
>  michael_heerdegen@web.de, trevor@trevorbentley.com
> 
> > > > lisp_align_malloc (alloc.c:1195)
> > > >  Fcons (alloc.c:2694)
> > > >   concat (fns.c:730)
> > > >    Fcopy_sequence (fns.c:598)
> > > >     timer_check (keyboard.c:4395)
> > > >      wait_reading_process_output (process.c:5334)
> > > >       sit_for (dispnew.c:6056)
> > > >        read_char (keyboard.c:2742)
> > > >         read_key_sequence (keyboard.c:9551)
> > > >          command_loop_1 (keyboard.c:1354)
> > > >           internal_condition_case (eval.c:1365)
> > > >            command_loop_2 (keyboard.c:1095)
> > > >             internal_catch (eval.c:1126)
> > > >              command_loop (keyboard.c:1074)
> > > >               recursive_edit_1 (keyboard.c:718)
> > > >                Frecursive_edit (keyboard.c:790)
> > > >                 main (emacs.c:2080)
> > > >  
> > > > There is 171MiB worth of allocations in that path.
> > > > 
> > > > There are a lot of traces ending in wait_reading_process_output that
> > > > are consuming 50MiB.
> > > 
> > > Thanks.  If they are like the one above, the allocations are due to
> > > some timer.  Could be jabber, I'll take a look at it.  Or maybe
> > > helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
> > > well.
> > 
> > Oops, I got this mixed up: the timer list is from Jean, but the massif
> > files are from Trevor.
> 
> Double oops: the above just shows that each time we process timers, we
> copy the list of the timers first.  Not sure what to do about that.
> Hmm...  Maybe we should try GC at the end of each timer_check call?

This doesn't seem to be necessary: timer functions are called via
'funcall', whose implementation already includes a call to maybe_gc.

Just to see if we have some problem there, I left an otherwise idle
Emacs, with 20 timer functions firing every second, running overnight.  It
gained less than 1MB of memory footprint after 10 hours.  So timers
alone cannot explain the dramatic increase in memory footprints
described in this bug report, although they might be a contributing
factor when the Emacs process already has lots of memory allocated to
it.
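
A minimal sketch of that experiment, for anyone who wants to reproduce
it (the ps invocation assumes a POSIX system):

    ;; 20 repeating one-second timers in an otherwise idle session.
    (dotimes (_ 20)
      (run-with-timer 1 1 #'ignore))
    ;; Hours later, check the memory footprint:
    (shell-command (format "ps -o rss= -p %d" (emacs-pid)))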

> Each call to lisp_align_malloc above requests a 1008-byte chunk of
> memory for a new block of Lisp conses.

More accurately, malloc is asked to provide a block of memory whose
size is 1024 bytes minus sizeof (void *).





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-28  9:00                                                 ` Eli Zaretskii
@ 2020-11-28 10:45                                                   ` Jean Louis
  2020-11-28 17:49                                                   ` Trevor Bentley
  2020-12-03  6:30                                                   ` Jean Louis
  2 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-28 10:45 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

Hello,

My good-bye function took about 7 minutes this time, with swap at
about 650 MB. Swap had constantly been less than 200 MB; then, without
me doing anything special (maybe I was idling), swap grew to 650 MB.
That is when I invoked the function:

((uptime "8 hours, 56 minutes, 27 seconds") (pid 14637) (garbage ((conses 16 2191203 1613364) (symbols 48 52843 237) (strings 32 301705 122437) (string-bytes 1 9982401) (vectors 16 99828) (vector-slots 8 1856426 1471952) (floats 8 738 5008) (intervals 56 180891 252942) (buffers 984 343))) (buffers-size 38553249) (vsize (vsize 3268444)))

One can see the larger vsize of 3.12 GB.

The largest buffer is a PDF of 5394959 bytes, then 4322895, 3706662, and so on.

I have tried deleting some buffers with M-x list-buffers:

- a few of the largest buffers I deleted without problem

- I tried deleting my Org file of size 966405, and when I pressed D
  nothing was shown on screen; instead the hard disk started working,
  and by its behavior this looks related to memory or swapping

- the screen came back and I could press x to delete those buffers.

- even though some buffers were deleted with x, at the next click on
  Size in list-buffers I could again find the deleted buffers in the
  list. This is probably an unrelated bug. I pressed x again and they
  disappeared. But what if they were not really deleted the first time?

I will work a little more in this session and will then provide the
mtrace for pid 14637.

If anything else needs to be provided, let me know.

Jean






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27  7:52                                             ` Eli Zaretskii
  2020-11-27  8:20                                               ` Eli Zaretskii
@ 2020-11-28 17:31                                               ` Trevor Bentley
  1 sibling, 0 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-11-28 17:31 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:
>>  Thanks.  If they are like the one above, the allocations are 
>> due to some timer.  Could be jabber, I'll take a look at it. 
>> Or maybe helm-ff--cache-mode-refresh, whatever that is; need to 
>> look at Helm as well. 
> 
> Oops, I got this mixed up: the timer list is from Jean, but the 
> massif files are from Trevor. 
> 
> Trevor, can you show the list of timers running on your system? 

I use helm as well, emacs-slack sets a bunch of timers, and I have 
a custom treemacs-based UI for emacs-slack that also refreshes on 
a timer.  A typical timer list looks like this:

(list-timers)
               0.2s            - thread-list--timer-func
               5.0s            - undo-auto--boundary-timer
               5.1s            - slack-ws-ping
               5.1s            - slack-ws-ping
               5.1s            - slack-ws-ping
               5.2s            - slack-ws-ping
               5.2s            - slack-ws-ping
              35.6s      1m 0.0s trev/slack--refresh-cache
   *           0.5s            - #f(compiled-function () #<bytecode 0x1b49fd33ce7c2899> [eldoc-mode global-eldoc-mode eldoc--supported-p (debug error) eldoc-print-current-symbol-info message "eldoc error: %s" nil])
   *           0.5s            t #f(compiled-function () #<bytecode 0xbaac23f6e8899> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
   *           0.5s      :repeat blink-cursor-start
   *           1.0s            - helm-ff--cache-mode-refresh

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-28  9:00                                                 ` Eli Zaretskii
  2020-11-28 10:45                                                   ` Jean Louis
@ 2020-11-28 17:49                                                   ` Trevor Bentley
  2020-11-30 17:17                                                     ` Trevor Bentley
  2020-12-03  6:30                                                   ` Jean Louis
  2 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-28 17:49 UTC (permalink / raw)
  To: Eli Zaretskii, carlos; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:

> Just to see if we have some problem there, I left an otherwise 
> idle Emacs, with 20 timer functions firing every second, running
> overnight.  It gained less than 1MB of memory footprint after 10 
> hours.  So timers alone cannot explain the dramatic increase in 
> memory footprints described in this bug report, although they 
> might be a contributing factor when the Emacs process already 
> has lots of memory allocated to it. 

Something else worth noting is that I have dozens and dozens of 
emacs processes running at all times, and only graphical X11 
clients have had the memory explosion.  Plenty of my `emacs -nw` 
instances have been open for 30+ days with heavy use, and all have 
stayed under 100MB RSS.

The most recent instance I ran is a graphical instance that I 
haven't done anything in except scroll around in a single small 
elisp file.  This one has an interesting difference in memory 
usage: the usage is large (2GB heap), but it isn't growing on its 
own.  It seems to grow by 10-20MB every time it gets X11 window 
focus, and other than that it's stable.  If I alt-tab to it 
continuously, I can force its usage up.  It appears to be 
permanent.  This differs from my emacs-slack instances, which 
constantly grow even when backgrounded.
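
To correlate the growth with focus, one could log the allocation
counters on every focus change; a minimal sketch, assuming Emacs 27's
after-focus-change-function:

    (add-function :after after-focus-change-function
                  (lambda ()
                    (when (frame-focus-state)   ; only when gaining focus
                      (message "focus gained: %S" (memory-use-counts)))))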

I have yet another graphical instance that I just opened and 
minimized, and never focus.  It's still only using 70MB after over 
a week.  So at least it's not simply leaking all the time... some 
active use has to trigger it.

I'll have an mtrace for you from the current experiment (X11 focus 
leak) tomorrow or Monday.  I hope it's the same issue.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-28 17:49                                                   ` Trevor Bentley
@ 2020-11-30 17:17                                                     ` Trevor Bentley
  2020-11-30 18:15                                                       ` Eli Zaretskii
  2020-12-08 21:50                                                       ` Trevor Bentley
  0 siblings, 2 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-11-30 17:17 UTC (permalink / raw)
  To: Eli Zaretskii, carlos; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen

> I'll have an mtrace for you from the current experiment (X11 
> focus  leak) tomorrow or Monday.  I hope it's the same issue. 

Ok, here is my latest memory log and a matching libmtrace:

https://trevorbentley.com/mtrace3/

This capture is unique in three ways:
 1) Compared to my other tests, this one did not run emacs-slack
 and did about half of its leaking from X11 focus events, with the
 other half drifting upwards while idle.  This session has barely
 done anything.

 2) I added a custom (malloc-trim) command, and called it after 
 making my standard memory log.  At the end of the log, you can 
 see that after the trim, memory usage fell from 4GB to 50MB. 
 Unfortunately, this malloc_trim() might make the libmtrace trace 
 harder to make sense of.  But, at least in this case, it meant 
 99% of the memory could be given back to the OS?

 3) I ran the built-in emacs profiler.  The profiler memory 
 results are in the log, both in normal and reversed format, with 
 the largest element expanded.  I don't know how to interpret it, 
 but it looks like maybe a periodic timer started by helm is 
 responsible for 3+GB of RAM?

Also note that the (garbage-collect) call is timed now.  318 
seconds for this one.
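
(For the record, the same measurement can be made from Lisp with the
built-in benchmark macro; a minimal sketch:)

    (require 'benchmark)
    ;; benchmark-run returns (ELAPSED-SECONDS GC-COUNT GC-SECONDS).
    (message "GC took %.1f s" (car (benchmark-run (garbage-collect))))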

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-30 17:17                                                     ` Trevor Bentley
@ 2020-11-30 18:15                                                       ` Eli Zaretskii
  2020-11-30 18:33                                                         ` Trevor Bentley
  2020-12-08 21:50                                                       ` Trevor Bentley
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-30 18:15 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support,
>  dj@redhat.com, michael_heerdegen@web.de
> Cc: 
> Date: Mon, 30 Nov 2020 18:17:28 +0100
> 
>  3) I ran the built-in emacs profiler.  The profiler memory 
>  results are in the log

Thanks, but this doesn't really measure memory usage.  It just uses
malloc calls as a poor man's replacement for the SIGPROF signal, so the
results show a kind of CPU profile, not a memory profile.
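
(A dedicated memory profile can be requested explicitly instead; a
minimal sketch, with the caveat that it still records backtraces at
allocation time rather than actual retained memory:)

    (profiler-start 'mem)   ; or 'cpu+mem for both profiles
    ;; ... exercise Emacs ...
    (profiler-report)
    (profiler-stop)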

>  I don't know how to interpret it, but it looks like maybe a
>  periodic timer started by helm is responsible for 3+GB of RAM?

More like it's responsible for most of the CPU activity.

> Also note that the (garbage-collect) call is timed now.  318 
> seconds for this one.

And the automatic GCs were much faster?

Thanks.  I hope Carlos will be able to give some hints based on your
data.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-30 18:15                                                       ` Eli Zaretskii
@ 2020-11-30 18:33                                                         ` Trevor Bentley
  2020-11-30 19:02                                                           ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-11-30 18:33 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:

>> Also note that the (garbage-collect) call is timed now.  318 
>> seconds for this one. 
> 
> And the automatic GCs were much faster? 
> 

Automatic GCs were unnoticeable, as before.  Still not sure what 
that means.  I think I'll instrument it in C to try to figure out 
what is going on.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-30 18:33                                                         ` Trevor Bentley
@ 2020-11-30 19:02                                                           ` Eli Zaretskii
  2020-11-30 19:17                                                             ` Jean Louis
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-11-30 19:02 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: carlos@redhat.com, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  bugs@gnu.support, dj@redhat.com, michael_heerdegen@web.de
> Cc: 
> Date: Mon, 30 Nov 2020 19:33:38 +0100
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> Also note that the (garbage-collect) call is timed now.  318 
> >> seconds for this one. 
> > 
> > And the automatic GCs were much faster? 
> > 
> 
> Automatic GCs were unnoticeable, as before.  Still not sure what 
> that means.  I think I'll instrument it in C to try to figure out 
> what is going on.

I'm stumped by this discrepancy, and feel that I'm missing something
very basic here...





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-30 19:02                                                           ` Eli Zaretskii
@ 2020-11-30 19:17                                                             ` Jean Louis
  2020-12-01 10:14                                                               ` Trevor Bentley
  2020-12-01 16:00                                                               ` Eli Zaretskii
  0 siblings, 2 replies; 166+ messages in thread
From: Jean Louis @ 2020-11-30 19:17 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, carlos, Trevor Bentley, michael_heerdegen

* Eli Zaretskii <eliz@gnu.org> [2020-11-30 22:10]:
> > From: Trevor Bentley <trevor@trevorbentley.com>
> > Cc: carlos@redhat.com, fweimer@redhat.com, 43389@debbugs.gnu.org,
> >  bugs@gnu.support, dj@redhat.com, michael_heerdegen@web.de
> > Cc: 
> > Date: Mon, 30 Nov 2020 19:33:38 +0100
> > 
> > Eli Zaretskii <eliz@gnu.org> writes:
> > 
> > >> Also note that the (garbage-collect) call is timed now.  318 
> > >> seconds for this one. 
> > > 
> > > And the automatic GCs were much faster? 
> > > 
> > 
> > Automatic GCs were unnoticeable, as before.  Still not sure what 
> > that means.  I think I'll instrument it in C to try to figure out 
> > what is going on.
> 
> I'm stumped by this discrepancy, and feel that I'm missing something
> very basic here...

This issue on helm is closed but looks very similar to what is
happening here and could maybe give related information:

https://github.com/helm/helm/issues/3121

Other issues related to memory leak at helm:
https://github.com/helm/helm/issues?q=memory+leak





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-30 19:17                                                             ` Jean Louis
@ 2020-12-01 10:14                                                               ` Trevor Bentley
  2020-12-01 10:33                                                                 ` Jean Louis
  2020-12-01 16:00                                                               ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-01 10:14 UTC (permalink / raw)
  To: Jean Louis, Eli Zaretskii; +Cc: fweimer, 43389, dj, michael_heerdegen, carlos

Jean Louis <bugs@gnu.support> writes:
> 
> This issue on helm is closed but looks very similar to what is 
> happening here and could maybe give related information: 
> 
> https://github.com/helm/helm/issues/3121 
> 
> Other issues related to memory leak at helm: 
> https://github.com/helm/helm/issues?q=memory+leak 

This is a different "helm" project, unrelated to emacs as far as I 
can tell.  The emacs helm is here: 
https://github.com/emacs-helm/helm

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-01 10:14                                                               ` Trevor Bentley
@ 2020-12-01 10:33                                                                 ` Jean Louis
  0 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-12-01 10:33 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, dj, michael_heerdegen, carlos

* Trevor Bentley <trevor@trevorbentley.com> [2020-12-01 13:15]:
> Jean Louis <bugs@gnu.support> writes:
> > 
> > This issue on helm is closed but looks very similar to what is happening
> > here and could maybe give related information:
> > 
> > https://github.com/helm/helm/issues/3121
> > 
> > Other issues related to memory leak at helm:
> > https://github.com/helm/helm/issues?q=memory+leak
> 
> This is a different "helm" project, unrelated to emacs as far as I can tell.
> The emacs helm is here: https://github.com/emacs-helm/helm

Ohhh :)





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-30 19:17                                                             ` Jean Louis
  2020-12-01 10:14                                                               ` Trevor Bentley
@ 2020-12-01 16:00                                                               ` Eli Zaretskii
  2020-12-01 16:14                                                                 ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  1 sibling, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-01 16:00 UTC (permalink / raw)
  To: Jean Louis; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

> Date: Mon, 30 Nov 2020 22:17:09 +0300
> From: Jean Louis <bugs@gnu.support>
> Cc: Trevor Bentley <trevor@trevorbentley.com>, fweimer@redhat.com,
>   43389@debbugs.gnu.org, dj@redhat.com, carlos@redhat.com,
>   michael_heerdegen@web.de
> 
> This issue on helm is closed but looks very similar to what is
> happening here and could maybe give related information:
> 
> https://github.com/helm/helm/issues/3121
> 
> Other issues related to memory leak at helm:
> https://github.com/helm/helm/issues?q=memory+leak

Are these at all relevant?  They are not about Emacs, AFAIU.  There are
many ways to have a leak and run out of memory, most of them unrelated
to what happens in our case.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-01 16:00                                                               ` Eli Zaretskii
@ 2020-12-01 16:14                                                                 ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
  0 siblings, 0 replies; 166+ messages in thread
From: Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors @ 2020-12-01 16:14 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, Jean Louis, dj, carlos, trevor, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:

>> Date: Mon, 30 Nov 2020 22:17:09 +0300
>> From: Jean Louis <bugs@gnu.support>
>> Cc: Trevor Bentley <trevor@trevorbentley.com>, fweimer@redhat.com,
>>   43389@debbugs.gnu.org, dj@redhat.com, carlos@redhat.com,
>>   michael_heerdegen@web.de
>> 
>> This issue on helm is closed but looks very similar to what is
>> happening here and could maybe give related information:
>> 
>> https://github.com/helm/helm/issues/3121
>> 
>> Other issues related to memory leak at helm:
>> https://github.com/helm/helm/issues?q=memory+leak
>
> Are these at all relevant? they are not about Emacs, AFAIU.  There are
> many ways to have a leak and run out of memory, most of them unrelated
> to what happens in our case.

That's another helm, "The package manager for Kubernetes", not the Elisp
package.

  Andrea





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-28  9:00                                                 ` Eli Zaretskii
  2020-11-28 10:45                                                   ` Jean Louis
  2020-11-28 17:49                                                   ` Trevor Bentley
@ 2020-12-03  6:30                                                   ` Jean Louis
  2 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-12-03  6:30 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, carlos, trevor, michael_heerdegen

I have finished one Emacs session of over 2 days and 11 hours, with
some differences in my behavior, and I have not observed any problem
with Emacs swapping hard or having memory problems that impact my
work. I have not upgraded from git either.

- while I did use helm in the sense of invoking it directly, I did
  not turn it on with helm-mode, though some functions used helm
  indirectly. This is because it was said that helm could be the
  problem. Without using helm I did not encounter the problem, over
  a longer time on average than before, when I did encounter it.

- I have not used helm to install packages (`helm-system-packages'),
  which I often do

- after about 1.5 days, my input-method state could not be switched
  back any more. C-\ did not work. Whatever I did, the input method
  remained. This may or may not be related; to me it looks related.

- symon-mode could not be turned off any more. It would say it was
  turned off, but it was not. I think it runs on a timer and something
  happened. By feeling it also looks related to this problem, but it
  may not be.

Because I was not able to change the input method back to normal, I
had to restart the session.

I have sent one mtrace; there is no report yet, so I am not sending
the previous 2 mtraces, from sessions which had the memory problem and
swapping and where I had to kill Emacs. Once they become needed, I can
send them.

I have an mtrace for this session as well and will send it when
somebody tells me it is needed.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-30 17:17                                                     ` Trevor Bentley
  2020-11-30 18:15                                                       ` Eli Zaretskii
@ 2020-12-08 21:50                                                       ` Trevor Bentley
  2020-12-08 22:12                                                         ` Carlos O'Donell
  2020-12-10 18:45                                                         ` Eli Zaretskii
  1 sibling, 2 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-12-08 21:50 UTC (permalink / raw)
  To: Eli Zaretskii, carlos; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen

Trevor Bentley <trevor@trevorbentley.com> writes:

I'm back with 5 mtraces:

https://trevorbentley.com/mtrace/

Keep in mind that these things compress well, so the largest one 
is on the order of 45GB when decompressed.

These are from various emacs instances, some running the 
emacs-slack package and others just editing elisp code.  All 
inflated to several gigabytes of heap over 1-4 days.

Log files similar to the ones I've been posting in this thread are 
in the archives.  I don't think there's any point in including 
them here anymore, as they're all about the same.

I've been too busy to modify emacs to print garbage collects, but 
these still show really long (garbage-collect) calls, often 
exceeding 15 minutes.

Last thing: I've had one unused (graphical) emacs session running 
for 16 days now, minimized.  It's still at 57MB RSS.  I can 
definitively say that the leak doesn't occur unless emacs is 
actively used, for all the good that does us.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-08 21:50                                                       ` Trevor Bentley
@ 2020-12-08 22:12                                                         ` Carlos O'Donell
  2020-12-10 18:45                                                         ` Eli Zaretskii
  1 sibling, 0 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-12-08 22:12 UTC (permalink / raw)
  To: Trevor Bentley, Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen

On 12/8/20 4:50 PM, Trevor Bentley wrote:
> Trevor Bentley <trevor@trevorbentley.com> writes:
> 
> I'm back with 5 mtraces:
> 
> https://trevorbentley.com/mtrace/
> 
> Keep in mind that these things compress well, so the largest one is on the order of 45GB when decompressed.
> 
> These are from various emacs instances, some running the emacs-slack package and others just editing elisp code.  All inflated to several gigabytes of heap over 1-4 days.
> 
> Log files similar to the ones I've been posting in this thread are in the archives.  I don't think there's any point in including them here anymore, as they're all about the same.
> 
> I've been too busy to modify emacs to print garbage collects, but these still show really long (garbage-collect) calls, often exceeding 15 minutes.
> 
> Last thing: I've had one unused (graphical) emacs session running for 16 days now, minimized.  It's still at 57MB RSS.  I can definitively say that the leak doesn't occur unless emacs is actively used, for all the good that does us.

I'm fetching this trace for analysis:
https://trevorbentley.com/mtrace/mtrace9.tar.bz2

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-11-27 15:33                                           ` Eli Zaretskii
@ 2020-12-08 22:15                                             ` Carlos O'Donell
  0 siblings, 0 replies; 166+ messages in thread
From: Carlos O'Donell @ 2020-12-08 22:15 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, michael_heerdegen, trevor

On 11/27/20 10:33 AM, Eli Zaretskii wrote:
>> Cc: trevor@trevorbentley.com, bugs@gnu.support, fweimer@redhat.com,
>>  43389@debbugs.gnu.org, dj@redhat.com, michael_heerdegen@web.de
>> From: Carlos O'Donell <carlos@redhat.com>
>> Date: Fri, 27 Nov 2020 00:04:56 -0500
>>
>> lisp_align_malloc (alloc.c:1195)
>>  Fcons (alloc.c:2694)
>>   concat (fns.c:730)
>>    Fcopy_sequence (fns.c:598)
>>     timer_check (keyboard.c:4395)
>>      wait_reading_process_output (process.c:5334)
>>       sit_for (dispnew.c:6056)
>>        read_char (keyboard.c:2742)
>>         read_key_sequence (keyboard.c:9551)
>>          command_loop_1 (keyboard.c:1354)
>>           internal_condition_case (eval.c:1365)
>>            command_loop_2 (keyboard.c:1095)
>>             internal_catch (eval.c:1126)
>>              command_loop (keyboard.c:1074)
>>               recursive_edit_1 (keyboard.c:718)
>>                Frecursive_edit (keyboard.c:790)
>>                 main (emacs.c:2080)
>>  
>> There is 171MiB worth of allocations in that path.
> 
> Are there chains of calls that are responsible for more than 171MB
> of allocated memory?
 
Yes, you can view them all yourself: just fetch the massif data
and use massif-visualizer to view it:

http://trevorbentley.com/massif.out.3364630

-- 
Cheers,
Carlos.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-08 21:50                                                       ` Trevor Bentley
  2020-12-08 22:12                                                         ` Carlos O'Donell
@ 2020-12-10 18:45                                                         ` Eli Zaretskii
  2020-12-10 19:21                                                           ` Stefan Monnier
                                                                             ` (2 more replies)
  1 sibling, 3 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-10 18:45 UTC (permalink / raw)
  To: Trevor Bentley, Stefan Monnier
  Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Stefan, please help with this complex issue (or maybe several
issues).  We have collected some evidence in this bug report, but I
don't yet see where this is going, or how to make any real progress
here.

One thing that I cannot explain is this:

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support,
>  dj@redhat.com, michael_heerdegen@web.de
> Cc: 
> Date: Tue, 08 Dec 2020 22:50:37 +0100
> 
> I've been too busy to modify emacs to print garbage collects, but 
> these still show really long (garbage-collect) calls, often 
> exceeding 15 minutes.

Trevor reported several times that automatic GC is fast as usual, but
manual invocations of "M-x garbage-collect" take much longer, many
minutes.  I don't understand how this could happen, because both
methods of invoking GC do exactly the same job.

I thought about possible ways of explaining the stark differences in
the time it takes to GC, and came up with these:

 . The depth of the run-time (C-level) stack.  If this is much deeper
   in one of the cases, it could explain the longer time.  But in that
   case, I'd expect the automatic GC to take longer, because typically
   the C stack is shallower when Emacs is idle than when it
   runs some Lisp.  This contradicts Trevor's observations.

 . Some difference in buffers and strings, which causes the manual GC
   to relocate and compact a lot of them.  But again: (a) why does the
   automatic GC never hit the same condition, and (b) the reverse is
   easier to explain, i.e. that lots of temporary strings and buffers
   exist while Lisp runs, but not when Emacs is idle.

Any other ideas?  Any data Trevor could provide, e.g. by attaching a
debugger during these prolonged GC, and telling us something
interesting?
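
One cheap data point: Emacs keeps cumulative GC counters, so each
manual GC can be timed from Lisp without a debugger.  A minimal sketch
using the built-in gcs-done and gc-elapsed variables:

    (let ((before gc-elapsed))
      (garbage-collect)
      (message "manual GC #%d took %.2f s"
               gcs-done (- gc-elapsed before)))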

TIA





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 18:45                                                         ` Eli Zaretskii
@ 2020-12-10 19:21                                                           ` Stefan Monnier
  2020-12-10 19:33                                                             ` Trevor Bentley
                                                                               ` (3 more replies)
  2020-12-10 20:24                                                           ` Jean Louis
  2020-12-12  1:28                                                           ` Jean Louis
  2 siblings, 4 replies; 166+ messages in thread
From: Stefan Monnier @ 2020-12-10 19:21 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, bugs, dj, carlos, Trevor Bentley,
	michael_heerdegen

> Trevor reported several times that automatic GC is fast as usual, but
> manual invocations of "M-x garbage-collect" take much longer, many
> minutes.  I don't understand how this could happen, because both
> methods of invoking GC do exactly the same job.

Indeed, that makes no sense.  The only thing that comes to mind is that
when they do `M-x garbage-collect` the 15 minutes aren't actually spent
in the GC but in some pre/post command hook or something like that
(e.g. in `execute-extended-command--shorter`)?

Do we have a `profiler-report` available for those 15 minutes?
I've taken a quick look at the massive threads in that bug report,
but haven't had the time to read in detail.  AFAICT we don't have a
profiler output for those 15 minutes, so it would be good to try:

    M-x profiler-start RET RET
    M-x garbage-collect RET     ;; This should presumably take several minutes
    M-x profiler-report RET

and then shows us this report (using C-u RET on the top-level elements
to unfold them).


        Stefan






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 19:21                                                           ` Stefan Monnier
@ 2020-12-10 19:33                                                             ` Trevor Bentley
  2020-12-10 19:47                                                               ` Stefan Monnier
  2020-12-10 20:26                                                             ` Jean Louis
                                                                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-10 19:33 UTC (permalink / raw)
  To: Stefan Monnier, Eli Zaretskii
  Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Stefan Monnier <monnier@iro.umontreal.ca> writes:
 
> Do we have a `profiler-report` available for those 15 minutes? 
> I've taken a quick look at the massive threads in that bug 
> report, but haven't had the time to read in detail.  AFAICT we 
> don't have a profiler output for those 15 minutes, so it would be 
> good to try: 
> 
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET
> 
> and then shows us this report (using C-u RET on the top-level 
> elements to unfold them). 

I made a profiler report for a complete 1-2 day session (see 
the e-mail referencing "mtrace3"), but none for just garbage 
collection.  I'll do that for the next one.

Is there any easy way to check if any of my packages are adding 
extra hooks around garbage-collect?  I can't imagine why they 
would, but you never know.

Thanks

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 19:33                                                             ` Trevor Bentley
@ 2020-12-10 19:47                                                               ` Stefan Monnier
  0 siblings, 0 replies; 166+ messages in thread
From: Stefan Monnier @ 2020-12-10 19:47 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

> Is there any easy way to check if any of my packages are adding extra hooks
> around garbage-collect?  I can't imagine why they would, but you never know.

I think there can be so many hooks involved that the profiler is the
only good way to figure that out.
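
That said, a few hook variables are quick to eyeball for suspects; a
minimal sketch:

    ;; Functions run around every command, and after every GC:
    (list pre-command-hook post-command-hook post-gc-hook)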


        Stefan






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 18:45                                                         ` Eli Zaretskii
  2020-12-10 19:21                                                           ` Stefan Monnier
@ 2020-12-10 20:24                                                           ` Jean Louis
  2020-12-12  1:28                                                           ` Jean Louis
  2 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-12-10 20:24 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, carlos, Trevor Bentley, michael_heerdegen,
	Stefan Monnier

* Eli Zaretskii <eliz@gnu.org> [2020-12-10 21:47]:
> Trevor reported several times that automatic GC is fast as usual, but
> manual invocations of "M-x garbage-collect" take much longer, many
> minutes.  I don't understand how this could happen, because both
> methods of invoking GC do exactly the same job.

Sometimes 30-36 minutes.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 19:21                                                           ` Stefan Monnier
  2020-12-10 19:33                                                             ` Trevor Bentley
@ 2020-12-10 20:26                                                             ` Jean Louis
  2020-12-10 20:30                                                             ` Jean Louis
  2020-12-12 11:20                                                             ` Trevor Bentley
  3 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-12-10 20:26 UTC (permalink / raw)
  To: Stefan Monnier
  Cc: fweimer, 43389, dj, carlos, Trevor Bentley, michael_heerdegen

* Stefan Monnier <monnier@iro.umontreal.ca> [2020-12-10 22:21]:
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET

I will try with a function doing all three together.






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 19:21                                                           ` Stefan Monnier
  2020-12-10 19:33                                                             ` Trevor Bentley
  2020-12-10 20:26                                                             ` Jean Louis
@ 2020-12-10 20:30                                                             ` Jean Louis
  2020-12-12 11:20                                                             ` Trevor Bentley
  3 siblings, 0 replies; 166+ messages in thread
From: Jean Louis @ 2020-12-10 20:30 UTC (permalink / raw)
  To: Stefan Monnier
  Cc: fweimer, 43389, dj, carlos, Trevor Bentley, michael_heerdegen

* Stefan Monnier <monnier@iro.umontreal.ca> [2020-12-10 22:21]:
> > Trevor reported several times that automatic GC is fast as usual, but
> > manual invocations of "M-x garbage-collect" take much longer, many
> > minutes.  I don't understand how this could happen, because both
> > methods of invoking GC do exactly the same job.
> 
> Indeed, that makes no sense.  The only thing that comes to mind is that
> when they do `M-x garbage-collect` the 15 minutes aren't actually spent
> in the GC but in some pre/post command hook or something like that
> (e.g. in `execute-extended-command--shorter`)?
> 
> Do we have a `profiler-report` available for those 15 minutes?
> I've taken a quick look at the massive threads in that bug report,
> but haven't had the time to read in detail.  AFAICT we don't have a
> profiler output for those 15minutes, so it would be good to try:
> 
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET

Another issue is that since I use LD_PRELOAD with the malloc trace, I
have not encountered high swapping or Emacs becoming totally unusable.
And I have not upgraded Emacs; I changed basically nothing except
using the mtrace.

What I can still observe is that vsize grows high as usual. But I have
not observed swap growing high, or the hard disk working for 40
minutes or longer, maybe indefinitely, to find some swap memory.







^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 18:45                                                         ` Eli Zaretskii
  2020-12-10 19:21                                                           ` Stefan Monnier
  2020-12-10 20:24                                                           ` Jean Louis
@ 2020-12-12  1:28                                                           ` Jean Louis
  2020-12-12  8:49                                                             ` Andreas Schwab
  2 siblings, 1 reply; 166+ messages in thread
From: Jean Louis @ 2020-12-12  1:28 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, bugs, carlos, Trevor Bentley,
	michael_heerdegen, Stefan Monnier

* Eli Zaretskii <eliz@gnu.org> [2020-12-10 21:46]:
> Stefan, please help with this complex issue (or maybe several
> issues).  We have collected some evidence in this bug report, but I
> don't yet see where is this going, or how to make any real progress
> here.
> 
> One thing that I cannot explain is this:
> 
> > From: Trevor Bentley <trevor@trevorbentley.com>
> > Cc: fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support,
> >  dj@redhat.com, michael_heerdegen@web.de
> > Cc: 
> > Date: Tue, 08 Dec 2020 22:50:37 +0100
> > 
> > I've been too busy to modify emacs to print garbage collects, but 
> > these still show really long (garbage-collect) calls, often 
> > exceeding 15 minutes.
> 
> Trevor reported several times that automatic GC is fast as usual, but
> manual invocations of "M-x garbage-collect" take much longer, many
> minutes.  I don't understand how this could happen, because both
> methods of invoking GC do exactly the same job.

My observation over time is that running M-x garbage-collect created
the same effect as when I observed Emacs start doing something with
the hard disk and continue for an unpredictable number of minutes,
normally until I kill it. I have waited 10-20 minutes. So that could
be where the problem lies.

Something happens inside Emacs: an automatic garbage-collect is
invoked which cannot finish its job any time soon.

About 2 times I invoked garbage-collect manually and caused visually
about the same behavior to take place. I hope you understand this
explanation.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12  1:28                                                           ` Jean Louis
@ 2020-12-12  8:49                                                             ` Andreas Schwab
  0 siblings, 0 replies; 166+ messages in thread
From: Andreas Schwab @ 2020-12-12  8:49 UTC (permalink / raw)
  To: Jean Louis
  Cc: fweimer, 43389, dj, carlos, Trevor Bentley, michael_heerdegen,
	Stefan Monnier

On Dez 12 2020, Jean Louis wrote:

> My observation over time is that running M-x garbage-collect
> created the same effect as when I observed Emacs start doing
> something with the hard disk and continue for an unpredictable
> number of minutes.

This is totally expected.  When you are tight on memory, rummaging
through all of it can only make things worse.

Andreas.

-- 
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-10 19:21                                                           ` Stefan Monnier
                                                                               ` (2 preceding siblings ...)
  2020-12-10 20:30                                                             ` Jean Louis
@ 2020-12-12 11:20                                                             ` Trevor Bentley
  2020-12-12 11:40                                                               ` Eli Zaretskii
  3 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-12 11:20 UTC (permalink / raw)
  To: Stefan Monnier, Eli Zaretskii
  Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Stefan Monnier <monnier@iro.umontreal.ca> writes:

> Do we have a `profiler-report` available for those 15 minutes? 
> I've taken a quick look at the massive threads in that bug 
> report, but haven't had the time to read in detail.  AFAICT we 
> don't have a profiler output for those 15 minutes, so it would be 
> good to try: 
> 
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET
> 
> and then show us this report (using C-u RET on the top-level 
> elements to unfold them). 

I'm back with a new mtrace, a profile of the long garbage-collect, 
and a new discovery.

First of all, the 26GB mtrace of a session that exploded to over 
8GB is available in mtrace12.tar.bz2 here:

https://trevorbentley.com/mtrace/

The summary log is in mtrace12_log.txt in the same directory, 
including output of profiler-report for only the duration of the 
garbage-collect, which took a record 50 minutes to complete.

As you can see in the profiler log, it is, in fact, the C 
garbage_collect() function eating all of the time:

----
;;(profiler-report)
- ...                            901307  99%
   Automatic GC                  901281  99%
 + trev/slack--refresh-cache         19   0%
----

Not only that, but I added printfs in emacs itself around the 
garbage_collect() and gc_sweep() functions.  Each line prints the 
unix timestamp when it began, and the 'end' lines print the 
duration since the start.  You can see that the entire 50 minutes 
was spent in gc_sweep():

----
1607695679: garbage_collect start
1607695680: gc_sweep start
1607695680: gc_sweep end (0 s)
1607695680: garbage_collect #1085 end (1 s)
1607695761: garbage_collect start
1607695762: gc_sweep start
1607695762: gc_sweep end (0 s)
1607726912: garbage_collect start
1607726913: gc_sweep start
1607729921: gc_sweep end (3008 s)
1607729922: garbage_collect #1086 end (3010 s)
----
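
For reference, a hypothetical reconstruction of that instrumentation
(the actual patch was not posted, so the placement and helper names
here are assumptions, not the code I actually ran):

----
/* Standalone sketch reproducing the log format above: print a Unix
   timestamp when each phase starts and the elapsed seconds when it
   ends.  gc_sweep_stub() stands in for the real sweep in alloc.c. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static int gc_count;

static void
gc_sweep_stub (void)
{
  sleep (1);                 /* stand-in for the real sweep work */
}

int
main (void)
{
  time_t gc_start = time (NULL);
  printf ("%ld: garbage_collect start\n", (long) gc_start);

  time_t sweep_start = time (NULL);
  printf ("%ld: gc_sweep start\n", (long) sweep_start);
  gc_sweep_stub ();
  time_t sweep_end = time (NULL);
  printf ("%ld: gc_sweep end (%ld s)\n",
          (long) sweep_end, (long) (sweep_end - sweep_start));

  time_t gc_end = time (NULL);
  printf ("%ld: garbage_collect #%d end (%ld s)\n",
          (long) gc_end, ++gc_count, (long) (gc_end - gc_start));
  return 0;
}
----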

And finally, here's what I find very suspicious: it was nearly 9 
hours since the last garbage collect ran (1607726912 - 
1607695762).  This is an instance that I used all day long, 
flittering back and forth between it and other work.  It had both 
tons of interactive use, and tons of idle time.  I don't think 9 
hours between garbage collects sounds right.
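
Checking the arithmetic on those timestamps:

    1607726912 - 1607695762 = 31150 s, i.e. about 8 hours 39 minutes.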

The last garbage collect before the long manual one also never 
printed an end message, which is confusing.  I see no early 
returns in garbage_collect()... is there some macro that can 
trigger a return, or maybe something uses longjmp?

Thanks,

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 11:20                                                             ` Trevor Bentley
@ 2020-12-12 11:40                                                               ` Eli Zaretskii
  2020-12-12 19:14                                                                 ` Stefan Monnier
  2020-12-12 22:16                                                                 ` Michael Heerdegen
  0 siblings, 2 replies; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-12 11:40 UTC (permalink / raw)
  To: Trevor Bentley
  Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen, monnier

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: carlos@redhat.com, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  bugs@gnu.support, dj@redhat.com, michael_heerdegen@web.de
> Cc: 
> Date: Sat, 12 Dec 2020 12:20:57 +0100
> 
> Not only that, but I added printfs in emacs itself around the 
> garbage_collect() and gc_sweep() functions.  Each line prints the 
> unix timestamp when it began, and the 'end' lines print the 
> duration since the start.  You can see that the entire 50 minutes 
> was spent in gc_sweep():

I think this is expected if you have a lot of objects to sweep.

> And finally, here's what I find very suspicious: it was nearly 9 
> hours since the last garbage collect ran (1607726912 - 
> 1607695762).  This is an instance that I used all day long, 
> flittering back and forth between it and other work.  It had both 
> tons of interactive use, and tons of idle time.  I don't think 9 
> hours between garbage collects sounds right.

It isn't.  So it is now important to find out why this happens.  Could
it be that some of your packages play with the value of the GC threshold?

> The last garbage collect before the long manual one also never 
> printed an end message, which is confusing.  I see no early 
> returns in garbage_collect()... is there some macro that can 
> trigger a return, or maybe something uses longjmp?

Not that I know of, no.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 11:40                                                               ` Eli Zaretskii
@ 2020-12-12 19:14                                                                 ` Stefan Monnier
  2020-12-12 19:20                                                                   ` Eli Zaretskii
  2020-12-12 22:16                                                                 ` Michael Heerdegen
  1 sibling, 1 reply; 166+ messages in thread
From: Stefan Monnier @ 2020-12-12 19:14 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, bugs, dj, carlos, Trevor Bentley,
	michael_heerdegen

>> Not only that, but I added printfs in emacs itself around the 
>> garbage_collect() and gc_sweep() functions.  Each line prints the 
>> unix timestamp when it began, and the 'end' lines print the 
>> duration since the start.  You can see that the entire 50 minutes 
>> was spent in gc_sweep():
>
> I think this is expected if you have a lot of objects to sweep.

Actually, I'm surprised most of the time is spent in gc_sweep:
mark_object is usually where most of the time is spent, so this suggests
that the total heap size is *much* larger than the amount of live objects.
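
(Roughly: marking costs time proportional to the live data, while
sweeping costs time proportional to the total heap, so a
sweep-dominated GC points at a heap that is mostly garbage.)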


        Stefan






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 19:14                                                                 ` Stefan Monnier
@ 2020-12-12 19:20                                                                   ` Eli Zaretskii
  2020-12-12 19:46                                                                     ` Stefan Monnier
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-12 19:20 UTC (permalink / raw)
  To: Stefan Monnier
  Cc: fweimer, 43389, bugs, dj, carlos, trevor, michael_heerdegen

> From: Stefan Monnier <monnier@iro.umontreal.ca>
> Cc: Trevor Bentley <trevor@trevorbentley.com>,  carlos@redhat.com,
>   fweimer@redhat.com,  43389@debbugs.gnu.org,  bugs@gnu.support,
>   dj@redhat.com,  michael_heerdegen@web.de
> Date: Sat, 12 Dec 2020 14:14:39 -0500
> 
> >> Not only that, but I added printfs in emacs itself around the 
> >> garbage_collect() and gc_sweep() functions.  Each line prints the 
> >> unix timestamp when it began, and the 'end' lines print the 
> >> duration since the start.  You can see that the entire 50 minutes 
> >> was spent in gc_sweep():
> >
> > I think this is expected if you have a lot of objects to sweep.
> 
> Actually, I'm surprised most of the time is spent in gc_sweep:
> mark_object is usually where most of the time is spent, so this suggests
> that the total heap size is *much* larger than the amount of live objects.

Sure.  But isn't that the same as what I said, just from another POV?
"A lot of objects to sweep" means there are many objects that aren't
live and need to have their memory freed.

Since GC wasn't run for many hours, having a lot of garbage to collect
is expected, right?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 19:20                                                                   ` Eli Zaretskii
@ 2020-12-12 19:46                                                                     ` Stefan Monnier
  2020-12-12 19:51                                                                       ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Stefan Monnier @ 2020-12-12 19:46 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, bugs, dj, carlos, trevor, michael_heerdegen

> Sure.  But isn't that the same as what I said, just from another POV?
> "A lot of objects to sweep" means there are many objects that aren't
> live and need to have their memory freed.
>
> Since GC wasn't run for many hours, having a lot of garbage to collect
> is expected, right?

Could be, but for tens of minutes?

AFAIK gc_sweep shouldn't cause too much thrashing either (the sweep is
a mostly sequential scan of memory, so even if the total heap is larger
than your total RAM, it should be ~O(total heap size / bandwidth from
swap partition)), so I can't imagine how we could spend tens of minutes
doing gc_sweep (or maybe the time is spent in gc_sweep but doing
something other than the sweep itself, e.g. handling weak pointers, or
removing dead markers from marker lists, ... still seems hard to
imagine spending tens of minutes, tho).
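
For concreteness (these numbers are assumed, not measured): even an
8GB heap swept at a modest 100MB/s of swap bandwidth comes to

    8 GB / (100 MB/s) = 80 s

still far short of the ~3000-second sweeps reported above.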


        Stefan






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 19:46                                                                     ` Stefan Monnier
@ 2020-12-12 19:51                                                                       ` Eli Zaretskii
  2020-12-12 20:14                                                                         ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-12 19:51 UTC (permalink / raw)
  To: Stefan Monnier
  Cc: fweimer, 43389, bugs, dj, carlos, trevor, michael_heerdegen

> From: Stefan Monnier <monnier@iro.umontreal.ca>
> Cc: trevor@trevorbentley.com,  carlos@redhat.com,  fweimer@redhat.com,
>   43389@debbugs.gnu.org,  bugs@gnu.support,  dj@redhat.com,
>   michael_heerdegen@web.de
> Date: Sat, 12 Dec 2020 14:46:20 -0500
> 
> > Sure.  But isn't that the same as what I said, just from another POV?
> > "A lot of objects to sweep" means there are many objects that aren't
> > live and need to have their memory freed.
> >
> > Since GC wasn't run for many hours, having a lot of garbage to collect
> > is expected, right?
> 
> Could be, but for tens of minutes?

If the system is paging, it could take that long, yes.

> AFAIK gc_sweep shouldn't cause too much thrashing either (the sweep is
> a mostly sequential scan of memory, so even if the total heap is larger
> than your total RAM, it should be ~O(total heap size / bandwidth from
> swap partition)), so I can't imagine how we could spend tens of minutes
> doing gc_sweep (or maybe the time is spent in gc_sweep but doing
> something other than the sweep itself, e.g. handling weak pointers, or
> removing dead markers from marker lists, ... still seems hard to
> imagine spending tens of minutes, tho).

Does gc_sweep involve touching all the memory we free?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 19:51                                                                       ` Eli Zaretskii
@ 2020-12-12 20:14                                                                         ` Trevor Bentley
  0 siblings, 0 replies; 166+ messages in thread
From: Trevor Bentley @ 2020-12-12 20:14 UTC (permalink / raw)
  To: Eli Zaretskii, Stefan Monnier
  Cc: fweimer, 43389, bugs, dj, carlos, michael_heerdegen

Eli Zaretskii <eliz@gnu.org> writes:

>> Could be, but for tens of minutes? 
> 
> If the system is paging, it could take that long, yes. 
> 
>> AFAIK gc_sweep shouldn't cause too much thrashing either (the 
>> sweep is a mostly sequential scan of memory, so even if the 
>> total heap is larger than your total RAM, it should be ~O(total 
>> heap size / bandwidth from 

In my particular case, I have plenty of free memory.  I assume 
nothing is paging to disk in any of my reports, though I haven't 
thought to explicitly check.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 11:40                                                               ` Eli Zaretskii
  2020-12-12 19:14                                                                 ` Stefan Monnier
@ 2020-12-12 22:16                                                                 ` Michael Heerdegen
  2020-12-13  3:34                                                                   ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Michael Heerdegen @ 2020-12-12 22:16 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: fweimer, 43389, dj, bugs, carlos, Trevor Bentley, monnier

Eli Zaretskii <eliz@gnu.org> writes:

> Could it be that some of your packages play with the value of the GC
> threshold?

Dunno if it matters, but `gnus-registry-save' binds it temporarily to a
high value, and I once experienced memory growing substantially while
using Gnus.

Michael.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-12 22:16                                                                 ` Michael Heerdegen
@ 2020-12-13  3:34                                                                   ` Eli Zaretskii
  2020-12-13 10:20                                                                     ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-13  3:34 UTC (permalink / raw)
  To: Michael Heerdegen; +Cc: fweimer, 43389, dj, bugs, carlos, trevor, monnier

> From: Michael Heerdegen <michael_heerdegen@web.de>
> Cc: Trevor Bentley <trevor@trevorbentley.com>,  monnier@iro.umontreal.ca,
>   carlos@redhat.com,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>   bugs@gnu.support,  dj@redhat.com
> Date: Sat, 12 Dec 2020 23:16:46 +0100
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > Could it be that some of your packages play with the value of the GC
> > threshold?
> 
> Dunno if it matters, but `gnus-registry-save' binds it temporarily to a
> high value

I'd prefer very much that our core code never did that.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13  3:34                                                                   ` Eli Zaretskii
@ 2020-12-13 10:20                                                                     ` Trevor Bentley
  2020-12-13 15:30                                                                       ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-13 10:20 UTC (permalink / raw)
  To: Eli Zaretskii, Michael Heerdegen
  Cc: fweimer, 43389, bugs, dj, carlos, monnier

>> Dunno if it matters, but `gnus-registry-save' binds it 
>> temporarily to a high value 
> 
> I'd prefer very much that our core code never did that. 

I'm not sure what that is, but I'm not calling it directly, and 
probably not indirectly either.  I'm not doing any mail reading in 
the instances that are inflating.

I print the gc variables in each of my log analyses, and they have 
always been the same: the default.

I have one instance running that has clearly hit the problem. 
garbage_collect() never printed its "end" message, and there have 
been no further garbage collects in nearly 20 hours:

----
1607783297: garbage_collect start
1607783297: gc_sweep start
1607783297: gc_sweep end (0 s)
----

Right now, I'm leaning towards this being the root cause. 
Something is causing a garbage collect to crash or hang or 
otherwise exit in some unknown way, and automatic garbage 
collection gets disabled until I manually retrigger it.

Garbage collect never runs on other threads/forks, right?  If it 
were hung forever inside garbage_collect(), I would expect the 
whole window to be frozen, but it is not.

I'll add more printfs in garbage_collect() and try to figure out 
where it is exiting.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13 10:20                                                                     ` Trevor Bentley
@ 2020-12-13 15:30                                                                       ` Eli Zaretskii
  2020-12-13 19:34                                                                         ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-13 15:30 UTC (permalink / raw)
  To: Trevor Bentley
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: monnier@iro.umontreal.ca, carlos@redhat.com, fweimer@redhat.com,
>  43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 11:20:32 +0100
> 
> I have one instance running that has clearly hit the problem. 
> garbage_collect() never printed its "end" message, and there have 
> been no further garbage collects in nearly 20 hours:
> 
> ----
> 1607783297: garbage_collect start
> 1607783297: gc_sweep start
> 1607783297: gc_sweep end (0 s)
> ----
> 
> Right now, I'm leaning towards this being the root cause. 
> Something is causing a garbage collect to crash or hang or 
> otherwise exit in some unknown way, and automatic garbage 
> collection gets disabled until I manually retrigger it.
> 
> Garbage collect never runs on other threads/forks, right?

If you use packages or commands that create Lisp threads, I think GC
can run from any of these Lisp threads.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13 15:30                                                                       ` Eli Zaretskii
@ 2020-12-13 19:34                                                                         ` Trevor Bentley
  2020-12-13 19:38                                                                           ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-13 19:34 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

Eli Zaretskii <eliz@gnu.org> writes:

>> Garbage collect never runs on other threads/forks, right? 
> 
> If you use packages or commands that create Lisp threads, I 
> think GC can run from any of these Lisp threads. 

Hmm, that makes it trickier.  No clue if my default packages 
launch threads, but it's possible.

I just hit the bug in one of my sessions: the call to 
unblock_input() in garbage_collect() never returns.  But the 
session still completely works, so I'm not really sure what's 
going on here.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13 19:34                                                                         ` Trevor Bentley
@ 2020-12-13 19:38                                                                           ` Eli Zaretskii
  2020-12-13 19:59                                                                             ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-13 19:38 UTC (permalink / raw)
  To: Trevor Bentley
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: michael_heerdegen@web.de, monnier@iro.umontreal.ca, carlos@redhat.com,
>  fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 20:34:11 +0100
> 
> >> Garbage collect never runs on other threads/forks, right? 
> > 
> > If you use packages or commands that create Lisp threads, I 
> > think GC can run from any of these Lisp threads. 
> 
> Hmm, that makes it trickier.  No clue if my default packages 
> launch threads, but it's possible.

Grep them for make-thread.

> I just hit the bug in one of my sessions: the call to 
> unblock_input() in garbage_collect() never returns.

If that ran in a thread, perhaps the thread died.

> But the session still completely works, so I'm not really sure
> what's going on here.

As long as the main thread runs, you might indeed see nothing special.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13 19:38                                                                           ` Eli Zaretskii
@ 2020-12-13 19:59                                                                             ` Trevor Bentley
  2020-12-13 20:21                                                                               ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-13 19:59 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

>> Hmm, that makes it trickier.  No clue if my default packages 
>> launch threads, but it's possible. 
> 
> Grep them for make-thread. 
> 
>> I just hit the bug in one of my sessions: the call to 
>> unblock_input() in garbage_collect() never returns. 
> 
> If that ran in a thread, perhaps the thread died. 
> 
>> But the session still completely works, so I'm not really sure 
>> what's going on here. 
> 
> As long as the main thread runs, you might indeed see nothing 
> special. 

This was exactly my thought: a thread I'm not even aware of must 
be silently crashing and leaving GC in a bad state.

But there's only a single case of 'make-thread' in my ~/.emacs.d/, 
and it's extremely unlikely that function ever runs 
("lsp-download-install").

More importantly, I'm comparing (list-threads) in emacs and "info 
threads" in gdb, and the failed instance looks identical to the 
non-failed instances: a single emacs thread ("Main"), and three 
real threads ("emacs", "gmain", "gdbus").  garbage_collect() not 
present in any backtrace when interrupted.

I'm at a loss for how it teleported out of that garbage_collect() 
call.  Back to printf, I guess.  Maybe there was a short-lived 
thread that isn't normally running...

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13 19:59                                                                             ` Trevor Bentley
@ 2020-12-13 20:21                                                                               ` Eli Zaretskii
  2020-12-13 20:41                                                                                 ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-13 20:21 UTC (permalink / raw)
  To: Trevor Bentley
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: michael_heerdegen@web.de, monnier@iro.umontreal.ca, carlos@redhat.com,
>  fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 20:59:34 +0100
> 
> > As long as the main thread runs, you might indeed see nothing 
> > special. 
> 
> This was exactly my thought: a thread I'm not even aware of must 
> be silently crashing and leaving GC in a bad state.
> 
> But there's only a single case of 'make-thread' in my ~/.emacs.d/, 
> and it's extremely unlikely that function ever runs 
> ("lsp-download-install").
> 
> More importantly, I'm comparing (list-threads) in emacs and "info 
> threads" in gdb, and the failed instance looks identical to the 
> non-failed instances: a single emacs thread ("Main"), and three 
> real threads ("emacs", "gmain", "gdbus").  garbage_collect() not 
> present in any backtrace when interrupted.
> 
> I'm at a loss for how it teleported out of that garbage_collect() 
> call.  Back to printf, I guess.  Maybe there was a short-lived 
> thread that isn't normally running...

Does thread-last-error return something non-nil?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13 20:21                                                                               ` Eli Zaretskii
@ 2020-12-13 20:41                                                                                 ` Trevor Bentley
  2020-12-14  3:24                                                                                   ` Eli Zaretskii
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-13 20:41 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

> Does thread-last-error return something non-nil? 

Nope, nil in all instances, including the one in a weird state.

I'm running one instance with printfs in some of the 
unblock_input() functions, and one in gdb with breakpoints on 
Fmake_thread, pthread_create, and emacs_abort.  If you have other 
suggested probe points, I'm happy to test.

Opening 10 emacses at a time seems to be going better for 
reproducing.  Sometimes it triggers in an hour, sometimes in 3 
days, but if I just flood the system with emacs processes I tend 
to hit it within a day.

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-13 20:41                                                                                 ` Trevor Bentley
@ 2020-12-14  3:24                                                                                   ` Eli Zaretskii
  2020-12-14 21:24                                                                                     ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2020-12-14  3:24 UTC (permalink / raw)
  To: Trevor Bentley
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

> From: Trevor Bentley <trevor@trevorbentley.com>
> Cc: michael_heerdegen@web.de, monnier@iro.umontreal.ca, carlos@redhat.com,
>  fweimer@redhat.com, 43389@debbugs.gnu.org, bugs@gnu.support, dj@redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 21:41:40 +0100
> 
> > Does thread-last-error return something non-nil? 
> 
> Nope, nil in all instance, including the one in a weird state.

Then it's unlikely that a thread died an unnatural death.

> I'm running one instance with printfs in some of the 
> unblock_input() functions, and one in gdb with breakpoints on 
> Fmake_thread, pthread_create, and emacs_abort.  If you have other 
> suggested probe points, I'm happy to test.

A breakpoint in watch_gc_cons_percentage, perhaps, to see if and when
the threshold gets changed?





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-14  3:24                                                                                   ` Eli Zaretskii
@ 2020-12-14 21:24                                                                                     ` Trevor Bentley
  2021-01-20 12:02                                                                                       ` Trevor Bentley
  0 siblings, 1 reply; 166+ messages in thread
From: Trevor Bentley @ 2020-12-14 21:24 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

>> > Does thread-last-error return something non-nil?  
>>  Nope, nil in all instances, including the one in a weird state. 
> 
> Then it's unlikely that a thread died unnatural death. 
> 

No, sure doesn't seem like it.  Just hit it in an instance with 
more printfs, and it looks like it leaps right out of some 
sub-call of process_pending_signals(), continuing to run elsewhere 
without finishing garbage_collect().  To me, that means exactly 
one thing: longjmp.

If something manages to longjmp out of garbage_collect() at that 
point, it leaves with consing_until_gc set to HI_THRESHOLD.  This 
must explain why automatic GC stops running for hours or days, but 
manual GCs still work.
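
A sketch of why that follows (this paraphrases the shape of the check
in lisp.h/alloc.c from memory; it is not the verbatim Emacs source):

----
#include <stdint.h>

/* Every allocation decrements this counter; automatic GC triggers
   only once it drops below zero.  If a longjmp out of GC leaves it
   at the huge HI_THRESHOLD value, the test below effectively never
   fires, while an explicit M-x garbage-collect bypasses it. */
intptr_t consing_until_gc = INTPTR_MAX;

void garbage_collect (void);         /* the real mark-and-sweep */

static inline void
maybe_gc_sketch (void)
{
  if (consing_until_gc < 0)          /* never true at INTPTR_MAX */
    garbage_collect ();
}
----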

I tried setting a breakpoint in longjmp, but it's called 3 times 
for every keypress!  That's inconvenient.  Running one single 
instance now with a conditional breakpoint on longjmp: it will 
break if longjmp is called while it's in unblock_input().

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2020-12-14 21:24                                                                                     ` Trevor Bentley
@ 2021-01-20 12:02                                                                                       ` Trevor Bentley
  2021-01-20 12:08                                                                                         ` Trevor Bentley
  2021-01-20 14:53                                                                                         ` Stefan Monnier
  0 siblings, 2 replies; 166+ messages in thread
From: Trevor Bentley @ 2021-01-20 12:02 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

> I tried setting a breakpoint in longjmp, but it's called 3 times 
> for every keypress!  That's inconvenient.  Running one single 
> instance now with a conditional breakpoint on longjmp: it will 
> break if longjmp is called while it's in unblock_input(). 

I disappeared for ages because... the problem disappeared.  I went 
a month without reproducing it, despite putting a hold on 
upgrading both system and emacs packages while debugging.  Very 
odd.

But today it appeared again.  And, for the first time, in a gdb 
session with breakpoints to confirm my theory.  I believe I've 
found the underlying issue.

If you have a look at this long backtrace, you can see that we are 
inside a garbage_collect call (frame #38).  An X11 focus event 
comes in, triggering a bunch of GTK/GDK/X calls.  Mysteriously, 
this leads to a maybe_quit() call which in turn calls longjmp(). 
longjmp jumps right out of the garbage collect, leaving it 
unfinished.

Internally in garbage_collect, consing_until_gc was set to the 
HI_THRESHOLD upper-bound.  It is left that way when longjmp leaps 
out of it, and no automatic garbage collect is ever performed 
again.  This is the start of the ballooning memory.

This also explains why my minimized emacs session never hits it 
and my work sessions hit it very often, and less often on 
weekends.  It's triggered by focus events.  I flitter around 
between windows constantly while working.

I don't know emacs internals, so you'll have to figure out if this 
is X dependent (probably) and/or GTK dependent.  It should be 
possible to come up with an easier way to reproduce it now.
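
For anyone who wants the failure mode in isolation, here is a minimal
standalone sketch (plain C, not Emacs code; every name in it is made
up for illustration):

----
/* If longjmp() fires between raising the counter and restoring it,
   the restore never runs -- the same shape as maybe_quit() throwing
   out of garbage_collect() via unblock_input(). */
#include <setjmp.h>
#include <stdio.h>
#include <limits.h>

static jmp_buf command_loop;    /* analogous to the quit catch    */
static long consing_until_gc;   /* analogous to the Emacs counter */

static void
fake_event_handler (void)
{
  longjmp (command_loop, 1);    /* analogous to Fthrow/maybe_quit */
}

static void
fake_garbage_collect (void)
{
  consing_until_gc = LONG_MAX;  /* "HI_THRESHOLD" while GC runs   */
  fake_event_handler ();        /* reached via unblock_input()    */
  consing_until_gc = 800000;    /* never executed                 */
}

int
main (void)
{
  if (setjmp (command_loop) == 0)
    fake_garbage_collect ();
  printf ("consing_until_gc = %ld\n", consing_until_gc);
  return 0;
}
----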

Backtrace:
-----------
(gdb) bt
#0  0x00007ffff5571230 in siglongjmp () at /usr/lib/libc.so.6
#1  0x00005555557bd38d in unwind_to_catch (catch=0x555555dfc320, type=NONLOCAL_EXIT_THROW, value=0x30) at eval.c:1181
#2  0x00005555557bd427 in Fthrow (tag=0xe75830, value=0x30) at eval.c:1198
#3  0x00005555557bdea7 in process_quit_flag () at eval.c:1526
#4  0x00005555557bdeef in maybe_quit () at eval.c:1547
#5  0x00005555557cbbb1 in Fassq (key=0xd0b0, alist=0x55555901c573) at fns.c:1609
#6  0x0000555555632b63 in window_parameter (w=0x555555f2d088, parameter=0xd0b0) at window.c:2262
#7  0x000055555563a075 in window_wants_tab_line (w=0x555555f2d088) at window.c:5410
#8  0x00005555555c22b1 in get_phys_cursor_geometry (w=0x555555f2d088, row=0x55555d9f3ef0, glyph=0x55555fd20e00, xp=0x7fffffff9c48, yp=0x7fffffff9c4c, heightp=0x7fffffff9c50) at xdisp.c:2650
#9  0x00005555556c1b12 in x_draw_hollow_cursor (w=0x555555f2d088, row=0x55555d9f3ef0) at xterm.c:9495
#10 0x00005555556c24f9 in x_draw_window_cursor (w=0x555555f2d088, glyph_row=0x55555d9f3ef0, x=32, y=678, cursor_type=HOLLOW_BOX_CURSOR, cursor_width=1, on_p=true, active_p=false) at xterm.c:9682
#11 0x000055555561a922 in display_and_set_cursor (w=0x555555f2d088, on=true, hpos=2, vpos=18, x=32, y=678) at xdisp.c:31738
#12 0x000055555561aa5b in update_window_cursor (w=0x555555f2d088, on=true) at xdisp.c:31773
#13 0x000055555561aabf in update_cursor_in_window_tree (w=0x555555f2d088, on_p=true) at xdisp.c:31791
#14 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555907a490, on_p=true) at xdisp.c:31789
#15 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555a514b68, on_p=true) at xdisp.c:31789
#16 0x000055555561ab37 in gui_update_cursor (f=0x555556625468, on_p=true) at xdisp.c:31805
#17 0x00005555556b9829 in x_frame_unhighlight (f=0x555556625468) at xterm.c:4490
#18 0x00005555556ba22d in x_frame_rehighlight (dpyinfo=0x55555626d6c0) at xterm.c:4852
#19 0x00005555556b98fc in x_new_focus_frame (dpyinfo=0x55555626d6c0, frame=0x0) at xterm.c:4520
#20 0x00005555556b9a3d in x_focus_changed (type=10, state=2, dpyinfo=0x55555626d6c0, frame=0x555556625468, bufp=0x7fffffffa0d0) at xterm.c:4554
#21 0x00005555556ba0a6 in x_detect_focus_change (dpyinfo=0x55555626d6c0, frame=0x555556625468, event=0x7fffffffa840, bufp=0x7fffffffa0d0) at xterm.c:4787
#22 0x00005555556c0235 in handle_one_xevent (dpyinfo=0x55555626d6c0, event=0x7fffffffa840, finish=0x555555c901d4 <current_finish>, hold_quit=0x7fffffffab50) at xterm.c:8810
#23 0x00005555556bde28 in event_handler_gdk (gxev=0x7fffffffa840, ev=0x55555cccf0c0, data=0x0) at xterm.c:7768
#24 0x00007ffff75f780f in  () at /usr/lib/libgdk-3.so.0
#25 0x00007ffff75fb3cb in  () at /usr/lib/libgdk-3.so.0
#26 0x00007ffff759f15b in gdk_display_get_event () at /usr/lib/libgdk-3.so.0
#27 0x00007ffff75fb104 in  () at /usr/lib/libgdk-3.so.0
#28 0x00007ffff6fcb8f4 in g_main_context_dispatch () at /usr/lib/libglib-2.0.so.0
#29 0x00007ffff701f821 in  () at /usr/lib/libglib-2.0.so.0
#30 0x00007ffff6fca121 in g_main_context_iteration () at /usr/lib/libglib-2.0.so.0
#31 0x00007ffff784e2c7 in gtk_main_iteration () at /usr/lib/libgtk-3.so.0
#32 0x00005555556c1821 in XTread_socket (terminal=0x5555560b7460, hold_quit=0x7fffffffab50) at xterm.c:9395
#33 0x000055555570f3a2 in gobble_input () at keyboard.c:6890
#34 0x000055555570f894 in handle_async_input () at keyboard.c:7121
#35 0x000055555570f8dd in process_pending_signals () at keyboard.c:7139
#36 0x000055555570f9cf in unblock_input_to (level=0) at keyboard.c:7162
#37 0x000055555570fa4c in unblock_input () at keyboard.c:7187
#38 0x000055555578f49a in garbage_collect () at alloc.c:6121
#39 0x000055555578efe7 in maybe_garbage_collect () at alloc.c:5964
#40 0x00005555557bb292 in maybe_gc () at lisp.h:5041
#41 0x00005555557c12d6 in Ffuncall (nargs=2, args=0x7fffffffad68) at eval.c:2793
#42 0x000055555580f7d6 in exec_byte_code
...
--------------

For breakpoints, I am doing the following:

1) make a global static variable in alloc.c:
static int enable_gc_trace = 0;

2) in garbage_collect(), 'enable_gc_trace++' when it starts and 
'enable_gc_trace--' when it ends.  I just wrapped the call to 
unblock_input(), but you could widen that window.

3) run in gdb with conditional breakpoints on GC and longjmp 
functions:
b siglongjmp if enable_gc_trace > 0
b internal_catch if enable_gc_trace > 0
b internal_catch_all if enable_gc_trace > 0
b maybe_garbage_collect if enable_gc_trace > 0

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2021-01-20 12:02                                                                                       ` Trevor Bentley
@ 2021-01-20 12:08                                                                                         ` Trevor Bentley
  2021-01-20 14:53                                                                                         ` Stefan Monnier
  1 sibling, 0 replies; 166+ messages in thread
From: Trevor Bentley @ 2021-01-20 12:08 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen, monnier

I'm incompetent at formatting e-mails.  Have a link to the 
backtrace instead:

https://trevorbentley.com/mtrace/backtrace.txt

-Trevor





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2021-01-20 12:02                                                                                       ` Trevor Bentley
  2021-01-20 12:08                                                                                         ` Trevor Bentley
@ 2021-01-20 14:53                                                                                         ` Stefan Monnier
  2021-01-20 15:32                                                                                           ` Eli Zaretskii
  1 sibling, 1 reply; 166+ messages in thread
From: Stefan Monnier @ 2021-01-20 14:53 UTC (permalink / raw)
  To: Trevor Bentley; +Cc: fweimer, 43389, dj, bugs, carlos, michael_heerdegen

> If you have a look at this long backtrace, you can see that we are inside
> a garbage_collect call (frame #38).  An X11 focus event comes in, triggering
> a bunch of GTK/GDK/X calls.  Mysteriously, this leads to a maybe_quit() call
> which in turn calls longjmp(). longjmp jumps right out of the garbage
> collect, leaving it unfinished.

Indeed, thanks!

> I don't know emacs internals, so you'll have to figure out if this is
> X dependent (probably) and/or GTK dependent.  It should be possible to come
> up with an easier way to reproduce it now.

The backtrace is clear enough, no need to reproduce it.

The GC properly speaking is actually finished at that point, BTW
(luckily: I think you'd have seen worse outcomes if that weren't the
case ;-).

I installed the simple patch below into `master`.  It should fix the
immediate problem of failing to set consing_until_gc back to a sane
value and it should also fix the other immediate problem of getting to
`siglongjmp` from `unblock_input` via `window_parameter`.

Eli, do you think it should go to `emacs-27`?

> Backtrace:
> -----------
> (gdb) bt
> #0  0x00007ffff5571230 in siglongjmp () at /usr/lib/libc.so.6
> #1  0x00005555557bd38d in unwind_to_catch (catch=0x555555dfc320, type=NONLOCAL_EXIT_THROW, value=0x30) at eval.c:1181
> #2  0x00005555557bd427 in Fthrow (tag=0xe75830, value=0x30) at eval.c:1198
> #3  0x00005555557bdea7 in process_quit_flag () at eval.c:1526
> #4  0x00005555557bdeef in maybe_quit () at eval.c:1547
> #5  0x00005555557cbbb1 in Fassq (key=0xd0b0, alist=0x55555901c573) at fns.c:1609
> #6 0x0000555555632b63 in window_parameter (w=0x555555f2d088, parameter=0xd0b0) at window.c:2262
> #7 0x000055555563a075 in window_wants_tab_line (w=0x555555f2d088) at window.c:5410
> #8 0x00005555555c22b1 in get_phys_cursor_geometry (w=0x555555f2d088, row=0x55555d9f3ef0, glyph=0x55555fd20e00, xp=0x7fffffff9c48, yp=0x7fffffff9c4c, heightp=0x7fffffff9c50) at xdisp.c:2650
> #9 0x00005555556c1b12 in x_draw_hollow_cursor (w=0x555555f2d088, row=0x55555d9f3ef0) at xterm.c:9495
> #10 0x00005555556c24f9 in x_draw_window_cursor (w=0x555555f2d088, glyph_row=0x55555d9f3ef0, x=32, y=678, cursor_type=HOLLOW_BOX_CURSOR, cursor_width=1, on_p=true, active_p=false) at xterm.c:9682
> #11 0x000055555561a922 in display_and_set_cursor (w=0x555555f2d088, on=true, hpos=2, vpos=18, x=32, y=678) at xdisp.c:31738
> #12 0x000055555561aa5b in update_window_cursor (w=0x555555f2d088, on=true) at xdisp.c:31773
> #13 0x000055555561aabf in update_cursor_in_window_tree (w=0x555555f2d088, on_p=true) at xdisp.c:31791
> #14 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555907a490, on_p=true) at xdisp.c:31789
> #15 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555a514b68, on_p=true) at xdisp.c:31789
> #16 0x000055555561ab37 in gui_update_cursor (f=0x555556625468, on_p=true) at xdisp.c:31805
> #17 0x00005555556b9829 in x_frame_unhighlight (f=0x555556625468) at xterm.c:4490
> #18 0x00005555556ba22d in x_frame_rehighlight (dpyinfo=0x55555626d6c0) at xterm.c:4852
> #19 0x00005555556b98fc in x_new_focus_frame (dpyinfo=0x55555626d6c0, frame=0x0) at xterm.c:4520
> #20 0x00005555556b9a3d in x_focus_changed (type=10, state=2, dpyinfo=0x55555626d6c0, frame=0x555556625468, bufp=0x7fffffffa0d0) at xterm.c:4554
> #21 0x00005555556ba0a6 in x_detect_focus_change (dpyinfo=0x55555626d6c0, frame=0x555556625468, event=0x7fffffffa840, bufp=0x7fffffffa0d0) at xterm.c:4787
> #22 0x00005555556c0235 in handle_one_xevent (dpyinfo=0x55555626d6c0, event=0x7fffffffa840, finish=0x555555c901d4 <current_finish>, hold_quit=0x7fffffffab50) at xterm.c:8810
> #23 0x00005555556bde28 in event_handler_gdk (gxev=0x7fffffffa840, ev=0x55555cccf0c0, data=0x0) at xterm.c:7768
> #24 0x00007ffff75f780f in  () at /usr/lib/libgdk-3.so.0
> #25 0x00007ffff75fb3cb in  () at /usr/lib/libgdk-3.so.0
> #26 0x00007ffff759f15b in gdk_display_get_event () at /usr/lib/libgdk-3.so.0
> #27 0x00007ffff75fb104 in  () at /usr/lib/libgdk-3.so.0
> #28 0x00007ffff6fcb8f4 in g_main_context_dispatch () at /usr/lib/libglib-2.0.so.0
> #29 0x00007ffff701f821 in  () at /usr/lib/libglib-2.0.so.0
> #30 0x00007ffff6fca121 in g_main_context_iteration () at /usr/lib/libglib-2.0.so.0
> #31 0x00007ffff784e2c7 in gtk_main_iteration () at /usr/lib/libgtk-3.so.0
> #32 0x00005555556c1821 in XTread_socket (terminal=0x5555560b7460, hold_quit=0x7fffffffab50) at xterm.c:9395
> #33 0x000055555570f3a2 in gobble_input () at keyboard.c:6890
> #34 0x000055555570f894 in handle_async_input () at keyboard.c:7121
> #35 0x000055555570f8dd in process_pending_signals () at keyboard.c:7139
> #36 0x000055555570f9cf in unblock_input_to (level=0) at keyboard.c:7162
> #37 0x000055555570fa4c in unblock_input () at keyboard.c:7187
> #38 0x000055555578f49a in garbage_collect () at alloc.c:6121
> #39 0x000055555578efe7 in maybe_garbage_collect () at alloc.c:5964
> #40 0x00005555557bb292 in maybe_gc () at lisp.h:5041
> #41 0x00005555557c12d6 in Ffuncall (nargs=2, args=0x7fffffffad68) at eval.c:2793
> #42 0x000055555580f7d6 in exec_byte_code
> ...  --------------

Of course, there might be other places where we could get to
`maybe_quit` from `XTread_socket`, given the enormous amount of code it
can execute.  :-(


        Stefan


diff --git a/src/alloc.c b/src/alloc.c
index c0a55e61b9..b86ed4ed26 100644
--- a/src/alloc.c
+++ b/src/alloc.c
@@ -6101,11 +6101,13 @@ garbage_collect (void)
 
   gc_in_progress = 0;
 
-  unblock_input ();
-
   consing_until_gc = gc_threshold
     = consing_threshold (gc_cons_threshold, Vgc_cons_percentage, 0);
 
+  /* Unblock *after* re-setting `consing_until_gc` in case `unblock_input`
+     signals an error (see bug#43389).  */
+  unblock_input ();
+
   if (garbage_collection_messages && NILP (Vmemory_full))
     {
       if (message_p || minibuf_level > 0)
diff --git a/src/window.c b/src/window.c
index e025e0b082..eb16e2a433 100644
--- a/src/window.c
+++ b/src/window.c
@@ -2260,7 +2260,7 @@ DEFUN ("window-parameters", Fwindow_parameters, Swindow_parameters,
 Lisp_Object
 window_parameter (struct window *w, Lisp_Object parameter)
 {
-  Lisp_Object result = Fassq (parameter, w->window_parameters);
+  Lisp_Object result = assq_no_quit (parameter, w->window_parameters);
 
   return CDR_SAFE (result);
 }






^ permalink raw reply related	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2021-01-20 14:53                                                                                         ` Stefan Monnier
@ 2021-01-20 15:32                                                                                           ` Eli Zaretskii
  2021-01-20 15:40                                                                                             ` Stefan Monnier
  0 siblings, 1 reply; 166+ messages in thread
From: Eli Zaretskii @ 2021-01-20 15:32 UTC (permalink / raw)
  To: Stefan Monnier
  Cc: fweimer, 43389, dj, bugs, michael_heerdegen, trevor, carlos

> From: Stefan Monnier <monnier@iro.umontreal.ca>
> Cc: Eli Zaretskii <eliz@gnu.org>,  michael_heerdegen@web.de,
>   carlos@redhat.com,  fweimer@redhat.com,  43389@debbugs.gnu.org,
>   bugs@gnu.support,  dj@redhat.com
> Date: Wed, 20 Jan 2021 09:53:08 -0500
> 
> > I don't know emacs internals, so you'll have to figure out if this is
> > X dependent (probably) and/or GTK dependent.  It should be possible to come
> > up with an easier way to reproduce it now.
> 
> The backtrace is clear enough, no need to reproduce it.

Indeed.

> I installed the simple patch below into `master`.  It should fix the
> immediate problem of failing to set consing_until_gc back to a sane
> value and it should also fix the other immediate problem of getting to
> `siglongjmp` from `unblock_input` via `window_parameter`.
> 
> Eli, do you think it should go to `emacs-27`?

Definitely, thanks.





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2021-01-20 15:32                                                                                           ` Eli Zaretskii
@ 2021-01-20 15:40                                                                                             ` Stefan Monnier
  2020-09-12  2:12                                                                                               ` bug#43395: 28.0.50; memory leak Madhu
  2021-01-20 15:49                                                                                               ` bug#43389: 28.0.50; Emacs memory leaks using hard disk all time Trevor Bentley
  0 siblings, 2 replies; 166+ messages in thread
From: Stefan Monnier @ 2021-01-20 15:40 UTC (permalink / raw)
  To: Eli Zaretskii
  Cc: fweimer, dj, bugs, michael_heerdegen, trevor, carlos, 43389-done

>> Eli, do you think it should go to `emacs-27`?
> Definitely, thanks.

OK, done.

Trevor: I marked this bug as closed under the assumption that this
problem is solved, but of course, if it re-occurs feel free to re-open
(ideally while running under GDB in a similar setup, so we get a clear
backtrace again ;-)



        Stefan






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
  2021-01-20 15:40                                                                                             ` Stefan Monnier
  2020-09-12  2:12                                                                                               ` bug#43395: 28.0.50; memory leak Madhu
@ 2021-01-20 15:49                                                                                               ` Trevor Bentley
  1 sibling, 0 replies; 166+ messages in thread
From: Trevor Bentley @ 2021-01-20 15:49 UTC (permalink / raw)
  To: Stefan Monnier, Eli Zaretskii
  Cc: fweimer, , dj, bugs, carlos, michael_heerdegen, 43389-done

Stefan Monnier <monnier@iro.umontreal.ca> writes:

> Trevor: I marked this bug as closed under the assumption that 
> this problem is solved, but of course, if it re-occurs feel free 
> to re-open (ideally while running under GDB in a similar setup, 
> so we get a clear backtrace again ;-) 

Agreed.

And thanks to everyone for all of the help!  I very much look 
forward to having long-lived emacs processes again :)

-Trevor






^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: bug#43395: closed
       [not found]                                                                                                 ` <handler.43395.D43389.161115724232582.notifdone@debbugs.gnu.org>
@ 2021-02-06 16:25                                                                                                   ` Madhu
  2021-03-21 14:10                                                                                                     ` Madhu
  0 siblings, 1 reply; 166+ messages in thread
From: Madhu @ 2021-02-06 16:25 UTC (permalink / raw)
  To: 43389

I think I am facing the problem again presently:

GNU Emacs 28.0.50 (build 2, x86_64-pc-linux-gnu, GTK+ Version 3.24.24,
cairo version 1.16.0) of 2021-01-21 (pgtk branch; i think the
corresponding commit on master was 8b33b76eb9fb)

  PID  %MEM    VIRT   SWAP    RES   CODE    DATA    SHR nMaj OOMs nDRT  %CPU COMMAND
 9912  17.8   81.8g      0   1.3g   2916   49.3g  10976  48k  732    0   0.0 emacs

I was able to get a M-x memory-report and M-x memory-usage (88.7 MiB
Overall Object Memory Usage), but I couldn't get a M-x malloc-info as
this was started --daemon.  Unfortunately I botched it and killed the
emacs process when trying to open a file and redirect malloc_info to
it in gdb.  I didn't check gc-cons-threshold or gc-cons-percentage,
but I did kill all buffers and did a few manual GCs, so I think those
were normal.

Were the code paths leading to the fixed code understood?  (On
another note, perhaps malloc_trim could be introduced into the GC via
an optional path?)
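
A sketch of what that optional path might look like (a hypothetical
integration point, glibc-specific):

----
#include <malloc.h>

/* Candidate post-GC hook: ask glibc to return free heap pages to
   the kernel.  malloc_trim (0) trims the top of the main heap and
   free space in other arenas; it does not exist on non-glibc
   systems, hence the guard. */
void
post_gc_trim (void)
{
#ifdef __GLIBC__
  malloc_trim (0);
#endif
}
----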





^ permalink raw reply	[flat|nested] 166+ messages in thread

* bug#43389: bug#43395: closed
  2021-02-06 16:25                                                                                                   ` bug#43389: bug#43395: closed Madhu
@ 2021-03-21 14:10                                                                                                     ` Madhu
  0 siblings, 0 replies; 166+ messages in thread
From: Madhu @ 2021-03-21 14:10 UTC (permalink / raw)
  To: 43389

[-- Attachment #1: Type: Text/Plain, Size: 1310 bytes --]

I think this dragon has not been put to sleep yet.  I ran into the
problem again - quite quickly, within some 5 hours of emacs uptime.

GNU Emacs 28.0.50 (build 1, x86_64-pc-linux-gnu, Motif Version 2.3.8,
cairo version 1.16.0) of 2021-03-08 (master commit a190bc9f3 - with
the motif removal reverted.)

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
21301 madhu     20   0 2988364   2.7g   0.0  36.7   5:04.01 S emacs

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
madhu    21301  1.9 36.7 2988364 2809536 pts/2 Ssl+ 14:06   5:03 /12/build/emacs/build-motif/src/emacs -nw

A full gc does not release the resident memory.

I had an emacs -nw session and one X emacsclient session.  I was
prompted for a password by mew in the gui frame, and the prompt
appeared on the tty frame - which was where I entered the password.
Then I noticed the cpu temperature was up, and a Ctrl-G on emacs
stopped that.  I think the leak may have occurred then, but I didn't
notice it until later.  When I did notice it, I killed all the
buffers, did a gc, and ran the memory and malloc reports - which I'm
attaching here in case they give any clues.

The emacs command line was:

TERM=xterm-256color MALLOC_ARENA_MAX=2 exec /12/build/emacs/build-motif/src/emacs -nw" > ~/emacs.log 2>&1

[-- Attachment #2: memory-usage.txt --]
[-- Type: Text/Plain, Size: 905 bytes --]

Garbage collection stats:
((conses 16 788433 262686) (symbols 48 72449 79) (strings 32 397265 20150) (string-bytes 1 26247796) (vectors 16 88641) (vector-slots 8 1961725 208444) (floats 8 1191 1377) (intervals 56 9514 5343) (buffers 992 8))

 =>	12.0MB (+ 4.01MB dead) in conses
	3.32MB (+ 3.70kB dead) in symbols
	12.1MB (+  630kB dead) in strings
	25.0MB in string-bytes
	1.35MB in vectors
	15.0MB (+ 1.59MB dead) in vector-slots
	9.30kB (+ 10.8kB dead) in floats
	 520kB (+  292kB dead) in intervals
	7.75kB in buffers

Total in lisp objects: 75.9MB (live 69.3MB, dead 6.51MB)

Buffer ralloc memory usage:
8 buffers
16.2kB total (14.0kB in gaps)
      Size	Gap	Name

      1277	753	memory-report.txt
       670	1575	*Buffer Details*
       274	5855	*Ibuffer*
       103	1918	*Messages*
        35	2002	 *Echo Area 0*
         0	2087	 *Minibuf-1*
         0	20	 *Minibuf-0*
         0	20	 *Echo Area 1*

[-- Attachment #3: memory-report.txt --]
[-- Type: Text/Plain, Size: 1277 bytes --]

Estimated Emacs Memory Usage

  69.3 MiB  Overall Object Memory Usage
  11.1 MiB  Memory Used By Global Variables
   6.6 MiB  Reserved (But Unused) Object Memory
   5.5 MiB  Memory Used By Symbol Plists
  61.7 KiB  Total Buffer Memory Usage
   1.2 KiB  Total Image Cache Size

Object Storage

  37.2 MiB  Strings
  16.3 MiB  Vectors
  12.0 MiB  Conses
   3.3 MiB  Symbols
 514.2 KiB  Intervals
   9.3 KiB  Floats
   6.8 KiB  Buffer-Objects

Largest Buffers

  31.5 KiB   *Minibuf-1*
  25.1 KiB  *Ibuffer*
   2.1 KiB   *Echo Area 0*
   1.3 KiB  *Memory Report*
   1.2 KiB  *Messages*
   0.3 KiB   *Minibuf-0*
   0.2 KiB   *Echo Area 1*

Largest Variables

   1.4 MiB  load-history
   1.2 MiB  $portage-category-package-names
 951.6 KiB  +lw-manual-data-7-1-0-0+
 574.5 KiB  ivy--all-candidates
 491.4 KiB  command-history
 296.5 KiB  face-new-frame-defaults
 282.5 KiB  help-definition-prefixes
 236.3 KiB  obarray
 143.1 KiB  org-entities
 137.5 KiB  save-place-alist
  92.5 KiB  global-map
  92.5 KiB  widget-global-map
  89.3 KiB  bibtex-biblatex-entry-alist
  84.4 KiB  buffer-name-history
  83.2 KiB  lw::manual-symbols
  82.2 KiB  gnus-summary-mode-map
  80.9 KiB  coding-system-alist
  79.0 KiB  shortdoc--groups
  77.3 KiB  ivy-history
  74.7 KiB  ivy--virtual-buffers


[-- Attachment #4: malloc-info.txt --]
[-- Type: Text/Plain, Size: 7925 bytes --]

<malloc version="1">
<heap nr="0">
<sizes>
  <size from="17" to="32" total="4992" count="156"/>
  <size from="33" to="48" total="96" count="2"/>
  <size from="49" to="64" total="189824" count="2966"/>
  <size from="65" to="80" total="12640" count="158"/>
  <size from="81" to="96" total="576" count="6"/>
  <size from="97" to="112" total="448" count="4"/>
  <size from="33" to="33" total="8778" count="266"/>
  <size from="49" to="49" total="686" count="14"/>
  <size from="193" to="193" total="6369" count="33"/>
  <size from="209" to="209" total="5225" count="25"/>
  <size from="225" to="225" total="5400" count="24"/>
  <size from="241" to="241" total="241" count="1"/>
  <size from="257" to="257" total="15677" count="61"/>
  <size from="273" to="273" total="6825" count="25"/>
  <size from="289" to="289" total="7225" count="25"/>
  <size from="305" to="305" total="915" count="3"/>
  <size from="321" to="321" total="21507" count="67"/>
  <size from="337" to="337" total="6740" count="20"/>
  <size from="353" to="353" total="3530" count="10"/>
  <size from="369" to="369" total="1845" count="5"/>
  <size from="385" to="385" total="20790" count="54"/>
  <size from="401" to="401" total="4010" count="10"/>
  <size from="417" to="417" total="2085" count="5"/>
  <size from="433" to="433" total="2165" count="5"/>
  <size from="449" to="449" total="16164" count="36"/>
  <size from="465" to="465" total="2325" count="5"/>
  <size from="481" to="481" total="3848" count="8"/>
  <size from="497" to="497" total="1491" count="3"/>
  <size from="513" to="513" total="15903" count="31"/>
  <size from="529" to="529" total="5819" count="11"/>
  <size from="545" to="545" total="4360" count="8"/>
  <size from="561" to="561" total="2805" count="5"/>
  <size from="577" to="577" total="21926" count="38"/>
  <size from="593" to="593" total="4151" count="7"/>
  <size from="609" to="609" total="4263" count="7"/>
  <size from="625" to="625" total="625" count="1"/>
  <size from="641" to="641" total="16666" count="26"/>
  <size from="657" to="657" total="24966" count="38"/>
  <size from="673" to="673" total="4711" count="7"/>
  <size from="689" to="689" total="4134" count="6"/>
  <size from="705" to="705" total="12690" count="18"/>
  <size from="721" to="721" total="8652" count="12"/>
  <size from="737" to="737" total="6633" count="9"/>
  <size from="753" to="753" total="753" count="1"/>
  <size from="769" to="769" total="9228" count="12"/>
  <size from="785" to="785" total="3140" count="4"/>
  <size from="801" to="801" total="4806" count="6"/>
  <size from="817" to="817" total="817" count="1"/>
  <size from="833" to="833" total="4165" count="5"/>
  <size from="849" to="849" total="10188" count="12"/>
  <size from="865" to="865" total="3460" count="4"/>
  <size from="881" to="881" total="2643" count="3"/>
  <size from="897" to="897" total="29601" count="33"/>
  <size from="913" to="913" total="2739" count="3"/>
  <size from="929" to="929" total="1858" count="2"/>
  <size from="945" to="945" total="9450" count="10"/>
  <size from="961" to="961" total="23064" count="24"/>
  <size from="977" to="977" total="18563" count="19"/>
  <size from="993" to="993" total="4965" count="5"/>
  <size from="1009" to="1009" total="94846" count="94"/>
  <size from="1025" to="1073" total="442846" count="430"/>
  <size from="1089" to="1137" total="94742" count="86"/>
  <size from="1153" to="1201" total="32700" count="28"/>
  <size from="1217" to="1249" total="29432" count="24"/>
  <size from="1281" to="1329" total="32617" count="25"/>
  <size from="1345" to="1393" total="20495" count="15"/>
  <size from="1409" to="1457" total="24369" count="17"/>
  <size from="1473" to="1521" total="16459" count="11"/>
  <size from="1537" to="1585" total="20317" count="13"/>
  <size from="1601" to="1649" total="19388" count="12"/>
  <size from="1665" to="1713" total="11783" count="7"/>
  <size from="1729" to="1777" total="8757" count="5"/>
  <size from="1793" to="1841" total="16377" count="9"/>
  <size from="1857" to="1905" total="15016" count="8"/>
  <size from="1921" to="1969" total="33153" count="17"/>
  <size from="1985" to="2033" total="68418" count="34"/>
  <size from="2049" to="2097" total="205492" count="100"/>
  <size from="2113" to="2161" total="89514" count="42"/>
  <size from="2177" to="2225" total="30782" count="14"/>
  <size from="2241" to="2289" total="27068" count="12"/>
  <size from="2305" to="2353" total="34799" count="15"/>
  <size from="2369" to="2417" total="28748" count="12"/>
  <size from="2433" to="2481" total="12277" count="5"/>
  <size from="2497" to="2529" total="17623" count="7"/>
  <size from="2561" to="2609" total="18119" count="7"/>
  <size from="2689" to="2737" total="16230" count="6"/>
  <size from="2753" to="2785" total="19431" count="7"/>
  <size from="2817" to="2865" total="17094" count="6"/>
  <size from="2881" to="2929" total="8723" count="3"/>
  <size from="2945" to="2993" total="29706" count="10"/>
  <size from="3009" to="3057" total="36524" count="12"/>
  <size from="3073" to="3121" total="101665" count="33"/>
  <size from="3137" to="3553" total="293369" count="89"/>
  <size from="3585" to="4081" total="163002" count="42"/>
  <size from="4097" to="4561" total="345522" count="82"/>
  <size from="4641" to="5105" total="166258" count="34"/>
  <size from="5121" to="5617" total="185635" count="35"/>
  <size from="5633" to="6113" total="100273" count="17"/>
  <size from="6145" to="6641" total="170315" count="27"/>
  <size from="6705" to="7153" total="61977" count="9"/>
  <size from="7169" to="7553" total="167719" count="23"/>
  <size from="7777" to="8177" total="95996" count="12"/>
  <size from="8193" to="8673" total="602825" count="73"/>
  <size from="8737" to="9201" total="72424" count="8"/>
  <size from="9217" to="9713" total="437983" count="47"/>
  <size from="9729" to="9985" total="88681" count="9"/>
  <size from="11473" to="11473" total="11473" count="1"/>
  <size from="13201" to="16369" total="1057223" count="71"/>
  <size from="16401" to="20321" total="2678684" count="156"/>
  <size from="20497" to="24513" total="617340" count="28"/>
  <size from="24657" to="28433" total="479890" count="18"/>
  <size from="28929" to="32529" total="368252" count="12"/>
  <size from="32801" to="36833" total="480238" count="14"/>
  <size from="37089" to="40241" total="196741" count="5"/>
  <size from="42017" to="65249" total="1370090" count="26"/>
  <size from="65649" to="92913" total="1487091" count="19"/>
  <size from="101089" to="115713" total="440132" count="4"/>
  <size from="141361" to="157793" total="299154" count="2"/>
  <size from="163889" to="230721" total="1753993" count="9"/>
  <size from="324161" to="468993" total="2808135" count="7"/>
  <size from="545265" to="1799181393" total="2731856606" count="30"/>
  <unsorted from="129" to="16929" total="163118" count="30"/>
</sizes>
<total type="fast" count="3292" size="208576"/>
<total type="rest" count="3139" size="2751257554"/>
<system type="current" size="2856476672"/>
<system type="max" size="2856476672"/>
<aspace type="total" size="2856476672"/>
<aspace type="mprotect" size="2856476672"/>
</heap>
<heap nr="1">
<sizes>
  <size from="17" to="32" total="992" count="31"/>
  <size from="33" to="48" total="240" count="5"/>
  <size from="97" to="112" total="112" count="1"/>
</sizes>
<total type="fast" count="37" size="1344"/>
<total type="rest" count="1" size="96656"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="3329" size="209920"/>
<total type="rest" count="3140" size="2751354210"/>
<total type="mmap" count="2" size="692224"/>
<system type="current" size="2856611840"/>
<system type="max" size="2856611840"/>
<aspace type="total" size="2856611840"/>
<aspace type="mprotect" size="2856611840"/>
</malloc>
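
In the dump above, the telling line is <total type="rest" count="3139" size="2751257554"/>: roughly 2.7 GB of the heap is free (non-fastbin) chunks that glibc has kept rather than returned to the kernel, which is why the memory shows up in RSS but not in garbage-collect's accounting. For reference, here is a minimal sketch of how such a dump is produced on glibc; Emacs 28's M-x malloc-info is, I believe, a thin wrapper over the same library call. (The file name and the 1 MB allocation below are only illustrative.)

    /* minimal-malloc-info.c -- a sketch, assuming glibc.
       malloc_info(3) is a GNU extension that writes the allocator's
       state as XML, in the same format as the report above.  */
    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main (void)
    {
      /* Allocate and free something so the free bins are not empty.  */
      void *p = malloc (1 << 20);
      free (p);

      /* The options argument must be 0; output goes to the given
         stdio stream.  Returns 0 on success, -1 on failure.  */
      if (malloc_info (0, stdout) != 0)
        perror ("malloc_info");
      return 0;
    }

On a live session the same XML can be obtained without recompiling anything: attach gdb to the running Emacs and use "call malloc_info (0, stderr)", then look at wherever the daemon's stderr was redirected.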

Thread overview: 166+ messages
2020-11-15 14:55 bug#44666: 28.0.50; malloc-info: Emacs became not responsive, using hard disk all time Jean Louis
2020-11-16 16:11 ` Eli Zaretskii
2020-11-16 16:17   ` Jean Louis
2020-11-17 15:04     ` Eli Zaretskii
2020-11-19  6:59       ` Jean Louis
2020-11-19 14:37         ` bug#43389: 28.0.50; Emacs memory leaks " Eli Zaretskii
2020-11-20  3:16           ` Jean Louis
2020-11-20  8:10             ` Eli Zaretskii
2020-11-22 19:52               ` Jean Louis
2020-11-22 20:16                 ` Eli Zaretskii
2020-11-23  3:41                   ` Carlos O'Donell
2020-11-23  8:11                   ` Jean Louis
2020-11-23  9:59                     ` Eli Zaretskii
2020-11-23 17:19                       ` Arthur Miller
2020-11-23 17:44                         ` Eli Zaretskii
2020-11-23 18:34                           ` Arthur Miller
2020-11-23 19:06                             ` Jean Louis
2020-11-23 19:15                             ` Eli Zaretskii
2020-11-23 19:49                               ` Arthur Miller
2020-11-23 20:04                                 ` Eli Zaretskii
2020-11-23 21:12                                   ` Arthur Miller
2020-11-24  2:07                                   ` Arthur Miller
2020-11-23 20:31                                 ` Jean Louis
2020-11-23 21:22                                   ` Arthur Miller
2020-11-24  5:29                                     ` Jean Louis
2020-11-24  8:15                                       ` Arthur Miller
2020-11-24  9:06                                         ` Jean Louis
2020-11-24  9:27                                           ` Arthur Miller
2020-11-24 17:18                                             ` Jean Louis
2020-11-25 14:59                                               ` Arthur Miller
2020-11-25 15:09                                                 ` Jean Louis
2020-11-23 13:27                   ` Jean Louis
2020-11-23 15:54                     ` Carlos O'Donell
2020-11-23 18:58                       ` Jean Louis
2020-11-23 19:34                         ` Eli Zaretskii
2020-11-23 19:49                           ` Jean Louis
2020-11-23 20:04                           ` Carlos O'Donell
2020-11-23 20:16                             ` Eli Zaretskii
2020-11-23 19:37                         ` Carlos O'Donell
2020-11-23 19:55                           ` Jean Louis
2020-11-23 20:06                             ` Carlos O'Donell
2020-11-23 20:18                               ` Jean Louis
2020-11-23 20:31                                 ` Eli Zaretskii
2020-11-23 20:41                                   ` Jean Louis
2020-11-23 20:53                                     ` Andreas Schwab
2020-11-23 21:09                                       ` Jean Louis
2020-11-24  3:25                                       ` Eli Zaretskii
2020-11-23 20:10                             ` Eli Zaretskii
2020-11-23 19:50                     ` Carlos O'Donell
2020-11-23 19:59                       ` Jean Louis
2020-11-23 10:59               ` Jean Louis
2020-11-23 15:46                 ` Eli Zaretskii
2020-11-23 17:29                   ` Arthur Miller
2020-11-23 17:45                     ` Eli Zaretskii
2020-11-23 18:40                       ` Arthur Miller
2020-11-23 19:23                         ` Eli Zaretskii
2020-11-23 19:38                           ` Arthur Miller
2020-11-23 19:52                             ` Eli Zaretskii
2020-11-23 20:03                               ` Arthur Miller
2020-11-23 19:39                           ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
2020-11-23 19:59                             ` Arthur Miller
2020-11-23 20:15                               ` Eli Zaretskii
2020-11-23 21:15                                 ` Arthur Miller
2020-11-23 20:53                               ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
2020-11-23 18:33                   ` Jean Louis
2020-11-23 21:30                   ` Trevor Bentley
2020-11-23 22:11                     ` Trevor Bentley
2020-11-24 16:07                     ` Eli Zaretskii
2020-11-24 19:05                       ` Trevor Bentley
2020-11-24 19:35                         ` Eli Zaretskii
2020-11-25 10:22                           ` Trevor Bentley
2020-11-25 17:47                             ` Eli Zaretskii
2020-11-25 19:06                               ` Trevor Bentley
2020-11-25 19:22                                 ` Eli Zaretskii
2020-11-25 19:38                                   ` Trevor Bentley
2020-11-25 20:02                                     ` Eli Zaretskii
2020-11-25 20:43                                       ` Trevor Bentley
2020-11-25 17:48                           ` Carlos O'Donell
2020-11-25 17:45                       ` Carlos O'Donell
2020-11-25 18:03                         ` Eli Zaretskii
2020-11-25 18:57                           ` Carlos O'Donell
2020-11-25 19:13                             ` Eli Zaretskii
2020-11-26  9:09                           ` Jean Louis
2020-11-26 14:13                             ` Eli Zaretskii
2020-11-26 18:37                               ` Jean Louis
2020-11-27  5:08                                 ` Carlos O'Donell
2020-11-25 18:08                         ` Jean Louis
2020-11-25 18:51                           ` Trevor Bentley
2020-11-25 19:02                             ` Carlos O'Donell
2020-11-25 19:17                               ` Trevor Bentley
2020-11-25 20:51                                 ` Carlos O'Donell
2020-11-26 13:58                                   ` Eli Zaretskii
2020-11-26 20:21                                     ` Carlos O'Donell
2020-11-26 20:30                                       ` Eli Zaretskii
2020-11-27  5:04                                         ` Carlos O'Donell
2020-11-27  7:40                                           ` Eli Zaretskii
2020-11-27  7:52                                             ` Eli Zaretskii
2020-11-27  8:20                                               ` Eli Zaretskii
2020-11-28  9:00                                                 ` Eli Zaretskii
2020-11-28 10:45                                                   ` Jean Louis
2020-11-28 17:49                                                   ` Trevor Bentley
2020-11-30 17:17                                                     ` Trevor Bentley
2020-11-30 18:15                                                       ` Eli Zaretskii
2020-11-30 18:33                                                         ` Trevor Bentley
2020-11-30 19:02                                                           ` Eli Zaretskii
2020-11-30 19:17                                                             ` Jean Louis
2020-12-01 10:14                                                               ` Trevor Bentley
2020-12-01 10:33                                                                 ` Jean Louis
2020-12-01 16:00                                                               ` Eli Zaretskii
2020-12-01 16:14                                                                 ` Andrea Corallo via Bug reports for GNU Emacs, the Swiss army knife of text editors
2020-12-08 21:50                                                       ` Trevor Bentley
2020-12-08 22:12                                                         ` Carlos O'Donell
2020-12-10 18:45                                                         ` Eli Zaretskii
2020-12-10 19:21                                                           ` Stefan Monnier
2020-12-10 19:33                                                             ` Trevor Bentley
2020-12-10 19:47                                                               ` Stefan Monnier
2020-12-10 20:26                                                             ` Jean Louis
2020-12-10 20:30                                                             ` Jean Louis
2020-12-12 11:20                                                             ` Trevor Bentley
2020-12-12 11:40                                                               ` Eli Zaretskii
2020-12-12 19:14                                                                 ` Stefan Monnier
2020-12-12 19:20                                                                   ` Eli Zaretskii
2020-12-12 19:46                                                                     ` Stefan Monnier
2020-12-12 19:51                                                                       ` Eli Zaretskii
2020-12-12 20:14                                                                         ` Trevor Bentley
2020-12-12 22:16                                                                 ` Michael Heerdegen
2020-12-13  3:34                                                                   ` Eli Zaretskii
2020-12-13 10:20                                                                     ` Trevor Bentley
2020-12-13 15:30                                                                       ` Eli Zaretskii
2020-12-13 19:34                                                                         ` Trevor Bentley
2020-12-13 19:38                                                                           ` Eli Zaretskii
2020-12-13 19:59                                                                             ` Trevor Bentley
2020-12-13 20:21                                                                               ` Eli Zaretskii
2020-12-13 20:41                                                                                 ` Trevor Bentley
2020-12-14  3:24                                                                                   ` Eli Zaretskii
2020-12-14 21:24                                                                                     ` Trevor Bentley
2021-01-20 12:02                                                                                       ` Trevor Bentley
2021-01-20 12:08                                                                                         ` Trevor Bentley
2021-01-20 14:53                                                                                         ` Stefan Monnier
2021-01-20 15:32                                                                                           ` Eli Zaretskii
2021-01-20 15:40                                                                                             ` Stefan Monnier
2020-09-12  2:12                                                                                               ` bug#43395: 28.0.50; memory leak Madhu
2020-09-14 15:08                                                                                                 ` Eli Zaretskii
2020-09-15  1:23                                                                                                   ` Madhu
     [not found]                                                                                                 ` <handler.43395.D43389.161115724232582.notifdone@debbugs.gnu.org>
2021-02-06 16:25                                                                                                   ` bug#43389: bug#43395: closed Madhu
2021-03-21 14:10                                                                                                     ` Madhu
2021-01-20 15:49                                                                                               ` bug#43389: 28.0.50; Emacs memory leaks using hard disk all time Trevor Bentley
2020-12-10 20:24                                                           ` Jean Louis
2020-12-12  1:28                                                           ` Jean Louis
2020-12-12  8:49                                                             ` Andreas Schwab
2020-12-03  6:30                                                   ` Jean Louis
2020-11-28 17:31                                               ` Trevor Bentley
2020-11-27 15:33                                           ` Eli Zaretskii
2020-12-08 22:15                                             ` Carlos O'Donell
2020-11-25 19:01                           ` Carlos O'Donell
2020-11-26 12:37                         ` Trevor Bentley
2020-11-26 14:30                           ` Eli Zaretskii
2020-11-26 15:19                             ` Trevor Bentley
2020-11-26 15:31                               ` Eli Zaretskii
2020-11-27  4:54                               ` Carlos O'Donell
2020-11-27  8:44                                 ` Jean Louis
2020-11-26 18:25                             ` Jean Louis
2020-11-27  4:55                               ` Carlos O'Donell
2020-11-23  3:35             ` Carlos O'Donell
2020-11-23 11:07               ` Jean Louis
2020-11-19  7:43       ` bug#44666: 28.0.50; malloc-info: Emacs became not responsive, " Jean Louis
