From: "Gerd Möllmann" <gerd.moellmann@gmail.com>
To: Pip Cet <pipcet@protonmail.com>
Cc: Ihor Radchenko <yantar92@posteo.net>,
Eli Zaretskii <eliz@gnu.org>,
emacs-devel@gnu.org, eller.helmut@gmail.com
Subject: Re: MPS: User GC customizations
Date: Thu, 04 Jul 2024 18:07:42 +0200
Message-ID: <m234opywc1.fsf@pro2.fritz.box>
In-Reply-To: <PFMWRXvNaSlsK0cACOx2sOZB0-GvwYqIzZCBdja15-yWx0kWr7Ha-OiS9XJ5Ng8lBj-P5nWiheoMw87DoAs5iCtSeDmypeac18fgRowTuSc=@protonmail.com> (Pip Cet's message of "Thu, 04 Jul 2024 15:12:50 +0000")
Pip Cet <pipcet@protonmail.com> writes:
> I think we can just set flags for "called MPS" and "in a scan
> function" and look at them in the SIGPROF handler to distinguish the
> four cases?
Not sure. What if MPS calls these in its own thread? I guess that
wouldn't be so interesting for Ihor.
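For concreteness, here is a rough sketch of what such flags could look
like; the names are invented for illustration and this is not what
igc.c does today:

  /* Flags set around the two situations Pip mentions, readable from
     the SIGPROF handler.  sig_atomic_t so the handler can read them
     safely.  */
  #include <signal.h>

  static volatile sig_atomic_t igc_in_mps_call; /* we called into MPS */
  static volatile sig_atomic_t igc_in_scan;     /* MPS called a scan function */

  /* Around every call into MPS:
       igc_in_mps_call = 1; ... mps_xxx (...); ... igc_in_mps_call = 0;
     At entry/exit of every scan function registered with MPS:
       igc_in_scan = 1; ... igc_in_scan = 0;  */

  static void
  classify_sigprof_sample (void)
  {
    if (igc_in_scan)
      ;  /* GC work done via our scan functions */
    else if (igc_in_mps_call)
      ;  /* allocation or another explicit MPS call */
    else
      ;  /* ordinary execution */
  }

Note that this only helps for work done on the thread that receives
SIGPROF; anything MPS does in its own threads would not set these
flags, which is my concern above.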
> My suspicion is that most problems are going to be due to large
> objects creating large segments which we have to scan completely, but
> that's a (micro-?) optimization for another day.
>
> Would it be okay to keep track of the largest object/pvec of each type
> in igc.el/Figc_info? I've got a patch here which does that, and I
> think the number is at least as interesting as the average :-)
Good idea! That fits well with showing the number of objects and average
size in igc-stats.
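Something along these lines, I imagine (the field and function names
here are made up, not taken from your patch):

  #include "lisp.h"   /* for enum pvec_type / PVEC_TAG_MAX */

  /* Per-pseudovector-type statistics, extended with the largest
     object seen, next to count and total size for the average.  */
  struct igc_type_stats
  {
    size_t nobjs;    /* number of objects */
    size_t nbytes;   /* total bytes, for the average */
    size_t largest;  /* largest single object */
  };

  static struct igc_type_stats pvec_stats[PVEC_TAG_MAX + 1];

  static void
  record_pvec (enum pvec_type type, size_t nbytes)
  {
    struct igc_type_stats *s = &pvec_stats[type];
    s->nobjs++;
    s->nbytes += nbytes;
    if (nbytes > s->largest)
      s->largest = nbytes;
  }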
>
>> > > What this variable does is give MPS notice that the client is currently
>> > > idle and it might be a good time to do some work.
>> >
>> > Without understanding what effect setting this variable has on the Emacs
>> > responsiveness, it does not seem very useful. (Exactly because MPS is
>> > concurrent)
>>
>> That was kind of my point. We can use idle time to set this variable
>> and see what effect it has, especially in interactive use. Sorry if that
>> wasn't clear. I'm personally trying 0.1 (100ms) here at the moment.
>
> I think it might very well have an effect, but being able to tune the
> MPS is important, too, both for debugging and to improve performance.
> And the documentation seems to be quite minimal.
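FWIW, if the idle-time hint ends up going through mps_arena_step (I
haven't checked how the variable is actually wired up in scratch/igc),
the call would look roughly like this; igc_on_idle and igc_idle_pause
are names I made up:

  #include "mps.h"

  static double igc_idle_pause = 0.1;  /* the 100 ms I'm trying */

  static void
  igc_on_idle (mps_arena_t arena)
  {
    /* Offer MPS up to igc_idle_pause seconds of work right now; a
       multiplier of 0.0 means we don't predict further idle time.
       Returns true if the arena did work or still has work to do.  */
    (void) mps_arena_step (arena, igc_idle_pause, 0.0);
  }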
>
>> > > > For example, my recent measurement of building agendas displayed 30% of
>> > > > the time spent in GC. (whatever this means in the context of our handling
>> > > > of SIGPROF)
>> > >
>> > > Exactly, what does it mean? And if we don't know, why is it an example
>> > > for anything?
>> >
>> > AFAIU, on master, a SIGPROF that arrives while our vanilla GC is running
>> > will be recorded as GC. In contrast, on scratch/igc, SIGPROF will put all
>> > the time when igc_busy_p () is non-nil into "GC".
>>
>> Right. And I wonder if that simply is because MPS is doing stuff in its
>> own thread.
>
> That (or another thread calling MPS to make an allocation) would definitely show up as a false positive.
>
>> > And igc_busy_p is not only non-nil when MPS is pausing Emacs to do its
>> > job, but also during object allocation. So, on master, profiler "GC"
>> > field records real GC pauses, while on scratch/igc "GC" field is GC
>> > pauses + new object allocation.
>>
>> The docs say
>>
>>  -- C Function: mps_bool_t mps_arena_busy (mps_arena_t arena)
>>
>>      Return true if an arena is part of the way through execution of
>>      an operation, false otherwise.
>>
>>      ‘arena’ is the arena.
>>
>>      Note: This function is intended to assist with debugging fatal
>>      errors in the client program.  It is not expected to be needed
>>      in normal use.  If you find yourself wanting to use this
>>      function other than in the use case described below, there may
>>      be a better way to meet your requirements: please contact us.
>>
>> What "partly through an operation" means is anyone's guess at this
>> point. Someone would have to consult the sources. The docs don't say
>> what you are suggesting, from my POV.
>
> IIRC it just checks whether the arena lock is held, whenever that
> might be.
Good to know, thanks! That's what I was suspecting.
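So, assuming igc_busy_p is essentially a wrapper around mps_arena_busy
(the arena variable name and the profiler details below are
illustrative, not the actual code), the 30% figure amounts to something
like:

  #include <stdbool.h>
  #include "mps.h"

  extern mps_arena_t igc_arena;   /* assumed name of the global arena */

  bool
  igc_busy_p (void)
  {
    /* True whenever the arena lock is held: real GC pauses, but also
       allocation and any other call into MPS.  */
    return mps_arena_busy (igc_arena);
  }

  /* In the SIGPROF handler, roughly:
       if (igc_busy_p ())  -> count the sample as "GC"
       else                -> record a normal backtrace sample  */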
>
>> > My figure of 30% says that igc_busy_p () is true for 30% of CPU time, which
>> > is a significant number. But it is not very useful unless we get some
>> > idea about which part of it is memory allocation and which part of it is
>> > MPS pausing all Emacs threads.
>> >
>> > Ideally, we should also have some way to get the allocation times on
>> > master. Then, we can compare them directly.
>>
>> Maybe it would be interesting to see what the measurements look like on
>> macOS, where the checks for igc_busy are not needed in the SIGPROF handler.
>
> How sure are we of that, by the way? My understanding is there are two
> ways signals can interfere with one another: SIGSEGV -> SIGPROF ->
> SIGSEGV (which wouldn't happen on macOS) and alloc -> SIGPROF ->
> SIGSEGV, which might.
I'm not an expert and so on, so this is just my understanding:

Darwin is a Mach descendant, and as such it uses Mach exceptions for
hardware faults. Among those is EXC_BAD_ACCESS, which one gets for
protection faults. Exceptions are handled by installing an exception
"port" with a handler thread behind it, which the OS invokes when the
exception occurs.

Exception handling is "synchronous", which I think means the OS suspends
the thread that caused the exception, invokes the handler on the port to
do what it wants, and, when done, lets the original thread continue, if
the handler wants that.

Signals on Darwin are something different and are used for non-hardware
cases.
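In code, setting up such a port looks roughly like this. A bare-bones
sketch from memory, not what MPS actually does on macOS; error handling
and the decoding of the exception message are omitted:

  #include <mach/mach.h>
  #include <pthread.h>

  static mach_port_t exc_port;

  static void *
  exception_server (void *arg)
  {
    for (;;)
      {
        /* Buffer big enough for an exception request message.  */
        union { mach_msg_header_t hdr; char space[1024]; } msg;
        if (mach_msg (&msg.hdr, MACH_RCV_MSG, 0, sizeof msg,
                      exc_port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL)
            != KERN_SUCCESS)
          break;
        /* A real handler passes the message to mach_exc_server(),
           which decodes EXC_BAD_ACCESS, lets us fix the protection,
           and builds the reply that resumes the faulting thread.  */
      }
    return NULL;
  }

  static void
  install_exception_port (void)
  {
    mach_port_allocate (mach_task_self (), MACH_PORT_RIGHT_RECEIVE,
                        &exc_port);
    mach_port_insert_right (mach_task_self (), exc_port, exc_port,
                            MACH_MSG_TYPE_MAKE_SEND);
    task_set_exception_ports (mach_task_self (), EXC_MASK_BAD_ACCESS,
                              exc_port, EXCEPTION_DEFAULT,
                              MACHINE_THREAD_STATE);
    pthread_t tid;
    pthread_create (&tid, NULL, exception_server, NULL);
  }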