From: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
To: <emacs-devel@gnu.org>
Subject: Understanding filter function calls
Date: Sun, 23 Jul 2023 22:46:09 -0700
Message-ID: <87y1j5vp3y.fsf@gmail.com>
I'm testing some code written for the upcoming Org 9.7 that previews
LaTeX fragments. Given one or more LaTeX fragments or environments, the
code works as follows:
1. Gather fragments in the buffer
2. Create a TeX file containing this LaTeX
3. Run latex (TeX -> DVI)
4. Run dvisvgm in the LaTeX process sentinel
   (DVI -> SVG or series of SVGs)
5. Update in-buffer previews as SVGs are generated through the dvisvgm
   process' filter function.
We use a filter function on the dvisvgm process to update previews
incrementally, since on larger runs this is much faster than waiting
for the process sentinel to fire.
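To make this concrete, the shape of the pipeline is roughly as follows.
This is a simplified sketch with hypothetical function names
(my/latex-preview--compile and friends), not the actual Org code:

    (defun my/latex-preview--compile (texfile)
      "Run latex on TEXFILE, then dvisvgm on the resulting DVI (sketch)."
      (make-process
       :name "org-latex"
       :command (list "latex" "-interaction=nonstopmode" texfile)
       :sentinel
       (lambda (proc _event)
         (when (eq (process-status proc) 'exit)
           ;; Step 4: DVI -> SVG(s), asynchronously.
           (make-process
            :name "org-dvisvgm"
            :command (list "dvisvgm" "--page=1-"
                           (file-name-with-extension texfile "dvi"))
            ;; Step 5: parse dvisvgm's stdout incrementally and place
            ;; previews as each SVG is reported finished.
            :filter #'my/latex-preview--svg-filter)))))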
This is different from how LaTeX preview generation has worked in Org
mode so far. It's as asynchronous as can be, and somewhat similar to
how preview-latex (part of AUCTeX) works, if you're familiar with that.
The problem is that preview generation time differs significantly
across TeXLive versions (i.e. different latex/dvisvgm executables).
For example, LaTeX preview generation in an Org file with ~600
fragments takes:
| Version      | Preview generation time |
|--------------+-------------------------|
| TeXLive 2022 | 2.65 secs               |
| TeXLive 2023 | 4.03 secs               |
This is with identical code on the Emacs side of things.
This difference is NOT explained by the newer latex or dvisvgm binaries
simply taking longer. When benchmarked individually on the same TeX
file -- and without Emacs in the picture -- latex 2022 and latex 2023
(as I'll call them here) take near-identical times, as do dvisvgm 2022
and dvisvgm 2023.
| Version      | latex run       | dvisvgm run  |
|--------------+-----------------+--------------|
| TeXLive 2022 | 253.9 ± 10.6 ms | 1266 ± 41 ms |
| TeXLive 2023 | 258.9 ± 15.0 ms | 1298 ± 15 ms |
The stdout from latex and dvisvgm, which the sentinel/filter functions
parse, is near-identical between the two versions, and the SVG images
are the same size. I've controlled every variable I could think of:
- Same Org file.
- Same org-latex-preview customizations/settings.
- Same Emacs buffers open, etc.
- Run `garbage-collect' immediately before benchmarking.
- Same background system processes.
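Since the pipeline is asynchronous, the in-Emacs timing can be measured
along these lines -- a minimal sketch, where `my/preview-buffer' and
`my/preview-finished-hook' are hypothetical stand-ins for the actual
entry point and completion hook:

    (defvar my/preview-start-time nil)
    (garbage-collect)                      ; start from a clean GC state
    (setq my/preview-start-time (float-time))
    (add-hook 'my/preview-finished-hook
              (lambda ()
                (message "Preview generation took %.2f s"
                         (- (float-time) my/preview-start-time))))
    (my/preview-buffer)                    ; kick off the async pipeline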
So why is the TeXLive 2023 run so much slower in Emacs?
Here is the result after profiling with elp and generating a flamegraph
(PNG image): https://abode.karthinks.com/share/olp-timing-chart.png
You can ignore the disproportionately long function calls; those are
related to GC, and one additional GC phase during the slower (TeXLive
2023) run cannot explain the discrepancy. Moreover, the TeXLive 2022
run sometimes has more GC phases and is still significantly faster.
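For anyone who wants to reproduce the profiling, the elp setup is
roughly the following (the "org-latex-preview" prefix is an assumption
about where the relevant functions live):

    (require 'elp)
    ;; Instrument every function whose name starts with this prefix.
    (elp-instrument-package "org-latex-preview")
    ;; ... trigger preview generation and wait for it to finish ...
    (elp-results)      ; per-function call counts, total and average times
    (elp-restore-all)  ; remove the instrumentation afterwards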
The overall synchronous run times of the code in Emacs are similar for
TeXLive 2022 and 2023 (0.77 vs 0.84 s). However, the dvisvgm filter
function is called quite differently in the two cases.
| Version      | call count | total time | average time |
|--------------+------------+------------+--------------|
| TeXLive 2022 | 25         | 0.77 secs  | 31 ms        |
| TeXLive 2023 | 39         | 0.84 secs  | 22 ms        |
Even though the overall time spent in the filter function is about the
same, during the TeXLive 2023 dvisvgm run
- the filter function is called 39 times instead of 25,
- each call processes less stdout,
- each call takes less time to run,
- and, crucially, the calls are spaced slightly further apart in time.
The net result is that the overall preview generation process takes
much longer to complete.
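For context, the dvisvgm filter itself follows the usual idiom of
accumulating arbitrary output chunks and acting only on complete lines.
Again a simplified sketch, not the actual Org code; the regexp and the
placement function are illustrative:

    (defun my/latex-preview--svg-filter (proc string)
      "Accumulate dvisvgm output and act on each completed line (sketch)."
      (let* ((acc (concat (process-get proc 'partial-line) string))
             (lines (split-string acc "\n")))
        ;; Emacs hands the filter whatever output is currently available,
        ;; so keep any trailing partial line around for the next call.
        (process-put proc 'partial-line (car (last lines)))
        (dolist (line (butlast lines))
          ;; Illustrative: match the line dvisvgm prints per finished SVG.
          (when (string-match "output written to \\(.+\\.svg\\)" line)
            (my/latex-preview--place-image (match-string 1 line))))))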
I've tested this multiple times over runs (Org/TeX files) of different
sizes, and the pattern is the same.
My questions are thus:
1. If the latex/dvisvgm executables from TeXLive 2022/2023 take about
   the same time, and the stdout (that the filter function sees) is
   identical, why is Emacs' filter function call behavior different?
2. Is there anything I can do to obtain timing behavior like that of
   the TeXLive 2022 run (see the chart linked above)? I would really
   like to speed up preview generation.
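For what it's worth, the only related knobs I'm aware of on the Emacs
side are the two variables below, and I don't know whether either is
actually relevant here:

    ;; How many bytes Emacs reads from a subprocess in one go (Emacs 27+).
    (setq read-process-output-max (* 64 1024))
    ;; Whether Emacs delays reading in order to batch up process output.
    (setq process-adaptive-read-buffering t)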
-Karthik