From: dkcombs@panix.com (David Combs)
Subject: garbage collecting! How to expand physical-mem used?
Date: 27 Jul 2004 01:29:36 -0400
Message-ID: <ce4p80$epf$1@panix3.panix.com>
My god, does my Emacs thrash! (I have it print a message whenever a
GC happens.)
(I keep a *huge* *Buffer List*, .emacs.desktop, etc.)
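(For the record, the GC printout comes from something like this in my
startup file -- the standard garbage-collection-messages variable, if
I have it right:

    ;; Show "Garbage collecting..." in the echo area whenever
    ;; a garbage collection runs, so I can see how often it fires.
    (setq garbage-collection-messages t)
)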
Anyway, while doing an ediff-buffers, the thrashing was really
heavy. I wondered whether I had lots of physical memory
left unused on the machine -- so, while Emacs was thrashing in one
window ("frame", I guess), I ran vmstat 2 in another dtterm,
and here it is.
Looks to me that although Emacs is thrashing all to beat hell,
the computer itself is not: pi and po stay at zero and the free
column never moves, so no actual paging seems to be going on.
(Do I read the vmstat output correctly?)
If so, the question is: how do I get Emacs to grab a larger working
set (or whatever it's called -- more physical memory)?
Any ideas?
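For concreteness, the only GC knob I know of is gc-cons-threshold
(the number of bytes of consing between collections); here is a
sketch of the kind of setting I mean -- raising it should trade GC
frequency for a bigger heap, if I understand it:

    ;; The default threshold is small; raising it makes Emacs
    ;; collect less often, at the cost of holding more memory.
    (setq gc-cons-threshold (* 20 1024 1024))  ; roughly 20 MB between GCs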
Thanks!
Here's the vmstat:
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd dd f0 s2 in sy cs us sy id
...
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 405 236 1 0 99
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 281 216 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 402 284 226 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 403 224 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 281 216 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 388 238 1 0 99
1 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 405 475 246 3 0 97
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd dd f0 s2 in sy cs us sy id
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 402 297 225 0 0 100
0 0 0 902088 135560 8 102 0 0 0 0 0 0 0 0 0 410 762 327 16 5 79
0 0 0 902088 135560 8 100 0 0 0 0 0 0 0 0 0 407 720 307 15 5 79
0 0 0 902088 135560 4 61 0 0 0 0 0 0 0 0 0 405 492 226 95 5 0
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 405 487 255 41 2 57
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 436 235 0 5 94
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 285 214 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 402 301 225 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 276 217 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 402 282 211 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 280 214 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 282 213 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 367 222 1 0 99
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 405 517 258 3 0 97
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 402 304 216 0 0 100
0 0 0 902088 135560 4 62 0 0 0 0 0 0 0 0 0 406 615 279 51 5 44
0 0 0 902088 135560 4 61 0 0 0 0 0 0 0 0 0 405 408 223 42 3 54
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 407 583 303 15 0 85
0 0 0 902088 135560 4 61 0 0 0 0 0 0 0 0 0 406 732 269 64 5 31
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd dd f0 s2 in sy cs us sy id
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 420 238 78 0 22
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 424 254 6 4 90
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 407 433 265 6 0 94
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 407 592 310 11 0 89
0 0 0 902088 135560 4 62 0 0 0 0 0 0 0 0 0 404 464 230 86 7 6
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 353 208 100 0 0
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 336 220 100 0 0
0 0 0 902088 135560 4 53 0 0 0 0 0 0 0 0 0 406 588 279 25 3 72
0 0 0 902088 135560 8 115 0 0 0 0 0 0 0 0 0 406 616 269 50 8 42
0 0 0 902088 135560 4 61 0 0 0 0 0 0 0 0 0 406 580 253 71 4 25
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 447 247 77 0 23
0 0 0 902088 135560 4 61 0 0 0 0 0 0 0 0 0 407 597 263 49 2 48
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 402 336 204 99 1 0
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 405 545 284 42 1 57
0 0 0 902088 135560 4 61 0 0 0 0 0 0 0 0 0 407 637 276 59 3 37
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 331 206 100 0 0
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 327 212 96 4 0
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 402 324 204 100 0 0
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 347 221 100 0 0
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd dd f0 s2 in sy cs us sy id
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 393 235 43 1 56
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 407 495 241 3 0 97
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 404 437 226 0 0 100
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 405 413 229 1 0 99
0 0 0 902088 135560 0 0 0 0 0 0 0 0 0 0 0 403 307 236 0 0 100
Thanks!