* Re: Shrinking the C core
2023-08-10 1:19 ` Eric S. Raymond
@ 2023-08-10 1:47 ` Christopher Dimech
2023-08-10 1:58 ` Eric Frederickson
` (3 subsequent siblings)
4 siblings, 0 replies; 247+ messages in thread
From: Christopher Dimech @ 2023-08-10 1:47 UTC (permalink / raw)
To: esr; +Cc: Po Lu, emacs-devel
> Sent: Thursday, August 10, 2023 at 1:19 PM
> From: "Eric S. Raymond" <esr@thyrsus.com>
> To: "Po Lu" <luangruo@yahoo.com>
> Cc: emacs-devel@gnu.org
> Subject: Re: Shrinking the C core
>
> Po Lu <luangruo@yahoo.com>:
> > "Eric S. Raymond" <esr@thyrsus.com> writes:
> >
> > > When I first worked on Emacs code in the 1980s Lisp was already fast
> > > enough, and machine speeds have gone up by something like 10^3 since.
> > > I plain don't believe the "slower" part can be an issue on modern
> > > hardware, not even on tiny SBCs.
> >
> > Can you promise the same, if your changes are not restricted to one or
> > two functions in fileio.c, but instead pervade throughout C source?
>
> Yes, in fact, I can. Because if by some miracle we were able to
> instantly rewrite the entirety of Emacs in Python (which I'm not
> advocating, I chose it because it's the slowest of the major modern
> scripting languages) basic considerations of clocks per second would
> predict it to run a *dead minimum* of two orders of magnitude faster
> than the Emacs of, say, 1990.
>
> And 1990 Emacs was already way fast enough for the human eye and
> brain, which can't even register interface lag of less than 0.17
> seconds (look up the story of Jef Raskin and how he exploited this
> psychophysical fact in the design of the Canon Cat sometime; it's very
> instructive). The human auditory system can perceive finer timeslices,
> down to about 0.02s in skilled musicians, but we're not using elisp
> for audio signal processing.
>
> If you take away nothing else from this conversation, at least get it
> through your head that "more Lisp might make Emacs too slow" is a
> deeply, *deeply* silly idea. It's 2023 and the only ways you can make
> a user-facing program slow enough for response lag to be noticeable
> are disk latency on spinning rust, network round-trips, or operations
> with a superlinear big-O in critical paths. Mere interpretive overhead
> won't do it.
>
> > Finally, you haven't addressed the remainder of the reasons I itemized.
>
> They were too obvious, describing problems that competent software
> engineers know how to prevent or hedge against, and you addressed me
> as though I were a n00b that just fell off a cabbage truck.
It's a habit of his. You can't fix it without blowing his fuse.
> My earliest contributions to Emacs were done so long ago that they
> predated the systematic Changelog convention; have you heard the
> expression "teaching your grandmother to suck eggs"? My patience for
> that sort of thing is limited.
> --
> <a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
>
>
>
>
^ permalink raw reply [flat|nested] 247+ messages in thread
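The perceptual-threshold claim above is easy to check empirically against any concrete Lisp workload. A minimal sketch using the built-in `benchmark-run' macro; the workload is an illustrative stand-in, not code from this thread:

```elisp
;; Sketch: timing a Lisp-heavy operation against the ~0.17 s
;; perceptual threshold cited above.  The workload is an
;; illustrative stand-in, not code from this thread.
(require 'benchmark)

(let ((elapsed (car (benchmark-run 1
                      (let (acc)
                        (dotimes (i 100000)
                          (push (* i i) acc))
                        (nreverse acc))))))
  (message "%.4f s (%s the 0.17 s threshold)"
           elapsed
           (if (< elapsed 0.17) "below" "above")))
```

`benchmark-run' returns a list (ELAPSED-SECONDS GC-RUNS GC-SECONDS), so the same call also shows how much of the cost, if any, is garbage collection.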
* Re: Shrinking the C core
2023-08-10 1:19 ` Eric S. Raymond
2023-08-10 1:47 ` Christopher Dimech
@ 2023-08-10 1:58 ` Eric Frederickson
2023-08-10 2:07 ` Sam James
2023-08-10 2:28 ` Po Lu
` (2 subsequent siblings)
4 siblings, 1 reply; 247+ messages in thread
From: Eric Frederickson @ 2023-08-10 1:58 UTC (permalink / raw)
To: esr; +Cc: emacs-devel
"Eric S. Raymond" <esr@thyrsus.com> writes:
> Po Lu <luangruo@yahoo.com>:
>> "Eric S. Raymond" <esr@thyrsus.com> writes:
>>
>> > When I first worked on Emacs code in the 1980s Lisp was already fast
>> > enough, and machine speeds have gone up by something like 10^3 since.
>> > I plain don't believe the "slower" part can be an issue on modern
>> > hardware, not even on tiny SBCs.
>>
>> Can you promise the same, if your changes are not restricted to one or
>> two functions in fileio.c, but instead pervade throughout C source?
>
> Yes, in fact, I can. Because if by some miracle we were able to
> instantly rewrite the entirety of Emacs in Python (which I'm not
> advocating, I chose it because it's the slowest of the major modern
> scripting languages) basic considerations of clocks per second would
> predict it to run a *dead minimum* of two orders of magnitude faster
> than the Emacs of, say, 1990.
>
> And 1990 Emacs was already way fast enough for the human eye and
> brain, which can't even register interface lag of less than 0.17
> seconds (look up the story of Jef Raskin and how he exploited this
> psychophysical fact in the design of the Canon Cat sometime; it's very
> instructive). The human auditory system can perceive finer timeslices,
> down to about 0.02s in skilled musicians, but we're not using elisp
> for audio signal processing.
>
> If you take away nothing else from this conversation, at least get it
> through your head that "more Lisp might make Emacs too slow" is a
> deeply, *deeply* silly idea. It's 2023 and the only ways you can make
> a user-facing program slow enough for response lag to be noticeable
> are disk latency on spinning rust, network round-trips, or operations
> with a superlinear big-O in critical paths. Mere interpretive overhead
> won't do it.
>
>> Finally, you haven't addressed the remainder of the reasons I itemized.
>
> They were too obvious, describing problems that competent software
> engineers know how to prevent or hedge against, and you addressed me
> as though I were a n00b that just fell off a cabbage truck. My
> earliest contributions to Emacs were done so long ago that they
> predated the systematic Changelog convention; have you heard the
> expression "teaching your grandmother to suck eggs"? My patience for
> that sort of thing is limited.
Sorry to jump in, but I can't resist.
You're critical of others for not showing you deep respect as a Developer of the
Highest Caliber, and yet you act with the absurd intention of "sneaking up on"
changes? And then refuse to reveal your apparently grand intentions underlying
this sleight-of-hand project?
Emacs is a program that I and many thousands of others rely on every day to get
work done; please don't pollute its development ecosystem with that utter
nonsense.
- Eric Frederickson
> --
> <a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 1:58 ` Eric Frederickson
@ 2023-08-10 2:07 ` Sam James
2023-08-10 2:44 ` Po Lu
` (2 more replies)
0 siblings, 3 replies; 247+ messages in thread
From: Sam James @ 2023-08-10 2:07 UTC (permalink / raw)
To: Eric Frederickson; +Cc: esr, emacs-devel
Eric Frederickson <ericfrederickson68@gmail.com> writes:
> "Eric S. Raymond" <esr@thyrsus.com> writes:
>
>> Po Lu <luangruo@yahoo.com>:
>>> "Eric S. Raymond" <esr@thyrsus.com> writes:
>>>
>>> > When I first worked on Emacs code in the 1980s Lisp was already fast
>>> > enough, and machine speeds have gone up by something like 10^3 since.
>>> > I plain don't believe the "slower" part can be an issue on modern
>>> > hardware, not even on tiny SBCs.
>>>
>>> Can you promise the same, if your changes are not restricted to one or
>>> two functions in fileio.c, but instead pervade throughout C source?
>>
>> Yes, in fact, I can. Because if by some miracle we were able to
>> instantly rewrite the entirety of Emacs in Python (which I'm not
>> advocating, I chose it because it's the slowest of the major modern
>> scripting languages) basic considerations of clocks per second would
>> predict it to run a *dead minimum* of two orders of magnitude faster
>> than the Emacs of, say, 1990.
>>
>> And 1990 Emacs was already way fast enough for the human eye and
>> brain, which can't even register interface lag of less than 0.17
>> seconds (look up the story of Jef Raskin and how he exploited this
>> psychophysical fact in the design of the Canon Cat sometime; it's very
>> instructive). The human auditory system can perceive finer timeslices,
>> down to about 0.02s in skilled musicians, but we're not using elisp
>> for audio signal processing.
>>
>> If you take away nothing else from this conversation, at least get it
>> through your head that "more Lisp might make Emacs too slow" is a
>> deeply, *deeply* silly idea. It's 2023 and the only ways you can make
>> a user-facing program slow enough for response lag to be noticeable
>> are disk latency on spinning rust, network round-trips, or operations
>> with a superlinear big-O in critical paths. Mere interpretive overhead
>> won't do it.
>>
>>> Finally, you haven't addressed the remainder of the reasons I itemized.
>>
>> They were too obvious, describing problems that competent software
>> engineers know how to prevent or hedge against, and you addressed me
>> as though I were a n00b that just fell off a cabbage truck. My
>> earliest contributions to Emacs were done so long ago that they
>> predated the systematic Changelog convention; have you heard the
>> expression "teaching your grandmother to suck eggs"? My patience for
>> that sort of thing is limited.
>
> Sorry to jump in, but I can't resist.
>
> You're critical of others for not showing you deep respect as a Developer of the
> Highest Caliber, and yet you act with the absurd intention of "sneaking up on"
> changes? And then refuse to reveal your apparently grand intentions underlying
> this sleight-of-hand project?
While not being up front about the changes is of debatable wisdom, I
didn't find it particularly alarming given I at least have always
understood the aim to be to have the C core as small as possible anyway.
I presume esr was under the same impression and hence even if he never
went through with his big plan, it'd be some easy wins from his perspective.
Laying the groundwork for something that may or may not come off with
independent changes one believes are worthwhile isn't underhanded if
it's just a pipedream in the back of your head but you think the changes
are good in isolation.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 2:07 ` Sam James
@ 2023-08-10 2:44 ` Po Lu
2023-08-10 6:48 ` Eli Zaretskii
2023-08-10 21:19 ` Eric S. Raymond
2 siblings, 0 replies; 247+ messages in thread
From: Po Lu @ 2023-08-10 2:44 UTC (permalink / raw)
To: Sam James; +Cc: Eric Frederickson, esr, emacs-devel
Sam James <sam@gentoo.org> writes:
> While not being up front about the changes is of debatable wisdom, I
> didn't find it particularly alarming given I at least have always
> understood the aim to be to have the C core as small as possible anyway.
I don't think that's true. The C core can evolve as much as it wants,
with explicit action taken to reduce it if necessary, or if doing so
assists flexibility.
But that's beside the point. Transcribing venerable and complex code
like fileio.c wholesale is out of the question, at least absent very
solid justifications attested by concrete plans to make use of the
changes.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 2:07 ` Sam James
2023-08-10 2:44 ` Po Lu
@ 2023-08-10 6:48 ` Eli Zaretskii
2023-08-10 21:21 ` Eric S. Raymond
2023-08-10 21:19 ` Eric S. Raymond
2 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-10 6:48 UTC (permalink / raw)
To: Sam James; +Cc: ericfrederickson68, esr, emacs-devel
> From: Sam James <sam@gentoo.org>
> Cc: esr@thyrsus.com, emacs-devel@gnu.org
> Date: Thu, 10 Aug 2023 03:07:58 +0100
>
> While not being up front about the changes is of debatable wisdom, I
> didn't find it particularly alarming given I at least have always
> understood the aim to be to have the C core as small as possible anyway.
There are no such goals, not from where I'm standing. We do prefer to
implement new features and extensions in Lisp if they can reasonably
be implemented in Lisp, but rewriting existing C code in Lisp is not a
goal in and of itself.
> Laying the groundwork for something that may or may not come off with
> independent changes one believes are worthwhile isn't underhanded if
> it's just a pipedream in the back of your head but you think the changes
> are good in isolation.
Assuming I understand what you mean by that: we've been burnt in the
past by people who started working on some grand changes, made
package-wide preparatory modifications, and then left for greener
pastures without arriving at any point where those changes have any
usefulness. That's a net loss: the code is less clear, gets in the way
of the muscle memory of veteran Emacs hackers (who used to know by
heart where some particular piece of code lives and how it works), and
brings exactly zero advantages to justify these downsides. So now I'd
prefer to start such changes only if there's a more-or-less clear and
agreed-upon plan for the new features, and generally do that on a
feature branch, so that we could avoid changes on master before they
are really useful and agreed-upon.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 6:48 ` Eli Zaretskii
@ 2023-08-10 21:21 ` Eric S. Raymond
0 siblings, 0 replies; 247+ messages in thread
From: Eric S. Raymond @ 2023-08-10 21:21 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: Sam James, ericfrederickson68, emacs-devel
Eli Zaretskii <eliz@gnu.org>:
> Assuming I understand what you mean by that: we've been burnt in the
> past by people who started working on some grand changes, made
> package-wide preparatory modifications, and then left for greener
> pastures without arriving at any point where those changes have any
> usefulness.
I completely agree that this is a failure mode to be avoided. Preparatory
changes have to be worthwhile in themselves.
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 2:07 ` Sam James
2023-08-10 2:44 ` Po Lu
2023-08-10 6:48 ` Eli Zaretskii
@ 2023-08-10 21:19 ` Eric S. Raymond
2023-08-10 21:56 ` Emanuel Berg
2023-08-11 5:46 ` Eli Zaretskii
2 siblings, 2 replies; 247+ messages in thread
From: Eric S. Raymond @ 2023-08-10 21:19 UTC (permalink / raw)
To: Sam James; +Cc: Eric Frederickson, emacs-devel
Sam James <sam@gentoo.org>:
> I presume esr was under the same impression and hence even if he never
> went through with his big plan, it'd be some easy wins from his perspective.
>
> Laying the groundwork for something that may or may not come off with
> independent changes one believes are worthwhile isn't underhanded if
> it's just a pipedream in the back of your head but you think the changes
> are good in isolation.
Exactly so. Experience has taught me the value of sneaking up on big
changes in such a way that if you have to bail out midway through the
grand plan you have still added value. And reducing the maintenance
complexity of the core is a good thing in itself.
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 21:19 ` Eric S. Raymond
@ 2023-08-10 21:56 ` Emanuel Berg
2023-08-11 5:46 ` Eli Zaretskii
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-10 21:56 UTC (permalink / raw)
To: emacs-devel
Eric S. Raymond wrote:
> Exactly so. Experience has taught me the value of sneaking
> up on big changes in such a way that if you have to bail out
> midway through the grand plan you have still added value.
And even when you complete it, the added value often proves to
be the real gain.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 21:19 ` Eric S. Raymond
2023-08-10 21:56 ` Emanuel Berg
@ 2023-08-11 5:46 ` Eli Zaretskii
2023-08-11 8:45 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-11 5:46 UTC (permalink / raw)
To: esr; +Cc: sam, ericfrederickson68, emacs-devel
> Date: Thu, 10 Aug 2023 17:19:22 -0400
> From: "Eric S. Raymond" <esr@thyrsus.com>
> Cc: Eric Frederickson <ericfrederickson68@gmail.com>, emacs-devel@gnu.org
>
> Sam James <sam@gentoo.org>:
> > I presume esr was under the same impression and hence even if he never
> > went through with his big plan, it'd be some easy wins from his perspective.
> >
> > Laying the groundwork for something that may or may not come off with
> > independent changes one believes are worthwhile isn't underhanded if
> > it's just a pipedream in the back of your head but you think the changes
> > are good in isolation.
>
> Exactly so. Experience has taught me the value of sneaking up on big
> changes in such a way that if you have to bail out midway through the
> grand plan you have still added value.
If each individual change has clear added value, yes. The problem is,
usually they don't, not IME with this project. I have too many gray
hairs from this false assumption.
> And reducing the maintenance complexity of the core is a good thing
> in itself.
There's no such thing as a separate "maintenance complexity of the
core" in Emacs (assuming by "core" you allude to the C code). The
routine maintenance in Emacs includes both the C code and the Lisp
code that is part of the low-level infrastructure. files.el,
simple.el, subr.el, and many other *.el files (basically, everything
that's loaded in loadup.el, and then some, like dired.el) -- all those
constitute the core of Emacs, and are under constant supervision and
attention of the maintainers and core developers.
Moving old and well-tested C code out to Lisp usually _increases_
maintenance burden, because the old code in most cases needs _zero_
maintenance nowadays. Thus, the condition of the changes to have
added value is not usually fulfilled, and we need "other
considerations" to justify the costs.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-11 5:46 ` Eli Zaretskii
@ 2023-08-11 8:45 ` Emanuel Berg
2023-08-11 11:24 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 8:45 UTC (permalink / raw)
To: emacs-devel
Eli Zaretskii wrote:
> Moving old and well-tested C code out to Lisp usually
> _increases_ maintenance burden, because the old code in most
> cases needs _zero_ maintenance nowadays.
One could maybe identify certain slow spots in Elisp and see
if there would be a point moving them to C.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
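One hedged sketch of how such slow spots might be weighed, using the standard `benchmark-run' macro to compare two implementations of the same job (both functions are illustrative stand-ins, not code from this thread):

```elisp
;; Sketch: comparing two Elisp implementations of the same job to
;; decide whether a spot is slow enough to matter.  Both functions
;; are illustrative stand-ins.
(require 'benchmark)

(defun my/join-slow (n)
  "Build a string of the numbers 0..N-1 by repeated `concat' (quadratic)."
  (let ((s ""))
    (dotimes (i n)
      (setq s (concat s (number-to-string i))))
    s))

(defun my/join-fast (n)
  "Build the same string via `mapconcat' (linear)."
  (mapconcat #'number-to-string (number-sequence 0 (1- n)) ""))

;; `benchmark-run' returns (ELAPSED GC-RUNS GC-SECONDS):
(benchmark-run 10 (my/join-slow 2000))
(benchmark-run 10 (my/join-fast 2000))
```

A spot whose timings stay far below perceptual thresholds even in the slow variant is probably not worth moving to C.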
* Re: Shrinking the C core
2023-08-11 8:45 ` Emanuel Berg
@ 2023-08-11 11:24 ` Eli Zaretskii
2023-08-11 12:12 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-11 11:24 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
> From: Emanuel Berg <incal@dataswamp.org>
> Date: Fri, 11 Aug 2023 10:45:55 +0200
>
> Eli Zaretskii wrote:
>
> > Moving old and well-tested C code out to Lisp usually
> > _increases_ maintenance burden, because the old code in most
> > cases needs _zero_ maintenance nowadays.
>
> One could maybe identify certain slow spots in Elisp and see
> if there would be a point moving them to C.
Yes, and we are doing that.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-11 11:24 ` Eli Zaretskii
@ 2023-08-11 12:12 ` Emanuel Berg
2023-08-11 13:16 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 12:12 UTC (permalink / raw)
To: emacs-devel
Eli Zaretskii wrote:
>>> Moving old and well-tested C code out to Lisp usually
>>> _increases_ maintenance burden, because the old code in
>>> most cases needs _zero_ maintenance nowadays.
>>
>> One could maybe identify certain slow spots in Elisp and
>> see if there would be a point moving them to C.
>
> Yes, and we are doing that.
Okay, what spots are those, and how do you find them?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-11 12:12 ` Emanuel Berg
@ 2023-08-11 13:16 ` Eli Zaretskii
0 siblings, 0 replies; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-11 13:16 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
> From: Emanuel Berg <incal@dataswamp.org>
> Date: Fri, 11 Aug 2023 14:12:12 +0200
>
> Eli Zaretskii wrote:
>
> >>> Moving old and well-tested C code out to Lisp usually
> >>> _increases_ maintenance burden, because the old code in
> >>> most cases needs _zero_ maintenance nowadays.
> >>
> >> One could maybe identify certain slow spots in Elisp and
> >> see if there would be a point moving them to C.
> >
> > Yes, and we are doing that.
>
> Okay, what spots are those, and how do you find them?
We usually find them by profiling, or by finding we need
functionalities that would be hard or impossible to implement in Lisp.
^ permalink raw reply [flat|nested] 247+ messages in thread
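For reference, the profiling workflow alluded to here can be sketched with Emacs's built-in sampling profiler (a minimal illustration, not a prescription from the thread):

```elisp
;; Sketch: using the built-in CPU profiler to find Lisp hot spots.
(require 'profiler)

(profiler-start 'cpu)   ; begin sampling CPU time
;; ... exercise the suspected slow operation here,
;;     e.g. re-fontify a large buffer ...
(profiler-stop)         ; stop sampling
(profiler-report)       ; show a call tree of where the time went
```

Interactively the same flow is `M-x profiler-start', the slow operation, then `M-x profiler-report'; functions that dominate the report are the candidates for moving to C.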
* Re: Shrinking the C core
2023-08-10 1:19 ` Eric S. Raymond
2023-08-10 1:47 ` Christopher Dimech
2023-08-10 1:58 ` Eric Frederickson
@ 2023-08-10 2:28 ` Po Lu
2023-08-10 4:15 ` Christopher Dimech
2023-08-10 7:44 ` Eli Zaretskii
2023-08-10 11:28 ` Dmitry Gutov
4 siblings, 1 reply; 247+ messages in thread
From: Po Lu @ 2023-08-10 2:28 UTC (permalink / raw)
To: Eric S. Raymond; +Cc: emacs-devel
"Eric S. Raymond" <esr@thyrsus.com> writes:
> Yes, in fact, I can. Because if by some miracle we were able to
> instantly rewrite the entirety of Emacs in Python (which I'm not
> advocating, I chose it because it's the slowest of the major modern
> scripting languages) basic considerations of clocks per second would
> predict it to run a *dead minimum* of two orders of magnitude faster
> than the Emacs of, say, 1990.
The important measure is how much slower it will be compared to the
Emacs of today. The Emacs of yesteryear is not relevant at all: simply
grab a copy of Emacs 23.1, and compare the speed of CC Mode font lock
there (on period hardware) to the speed of CC Mode font lock on
contemporary hardware today.
> They were too obvious, describing problems that competent software
> engineers know how to prevent or hedge against, and you addressed me
> as though I were a n00b that just fell off a cabbage truck. My
Projecting much?
I raised those concerns because I have seen them and suffered their
consequences. There is no place for hubris: analogous changes were also
performed by equally skilled and experienced Emacs developers, only for
issues to be uncovered years in the future. (For example, when a call
to `with-temp-buffer' was introduced to loadup.)
How many times must we suffer the consequences of indiscriminate
refactoring before we will recognize the obvious conclusion that
code which doesn't need to change, shouldn't?
> earliest contributions to Emacs were done so long ago that they
> predated the systematic Changelog convention; have you heard the
> expression "teaching your grandmother to suck eggs"? My patience for
> that sort of thing is limited.
If that is the attitude by which you treat other Emacs developers, then
from my POV this debate is over. We cannot work with you, when you
dismiss real-world concerns that have been seen innumerable times in
practice, based on a conceited view of your own skill.
Which, BTW, has already broken the build once. And the jury is still
out on whether your earlier change needs to be reverted, since Andrea
has yet to ascertain if it will lead to negative consequences for native
compilation.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 2:28 ` Po Lu
@ 2023-08-10 4:15 ` Christopher Dimech
0 siblings, 0 replies; 247+ messages in thread
From: Christopher Dimech @ 2023-08-10 4:15 UTC (permalink / raw)
To: Po Lu; +Cc: Eric S. Raymond, emacs-devel
> Sent: Thursday, August 10, 2023 at 2:28 PM
> From: "Po Lu" <luangruo@yahoo.com>
> To: "Eric S. Raymond" <esr@thyrsus.com>
> Cc: emacs-devel@gnu.org
> Subject: Re: Shrinking the C core
>
> "Eric S. Raymond" <esr@thyrsus.com> writes:
>
> > Yes, in fact, I can. Because if by some miracle we were able to
> > instantly rewrite the entirety of Emacs in Python (which I'm not
> > advocating, I chose it because it's the slowest of the major modern
> > scripting languages) basic considerations of clocks per second would
> > predict it to run a *dead minimum* of two orders of magnitude faster
> > than the Emacs of, say, 1990.
>
> The important measure is how much slower it will be compared to the
> Emacs of today. The Emacs of yesteryear is not relevant at all: simply
> grab a copy of Emacs 23.1, and compare the speed of CC Mode font lock
> there (on period hardware) to the speed of CC Mode font lock on
> contemporary hardware today.
>
> > They were too obvious, describing problems that competent software
> > engineers know how to prevent or hedge against, and you addressed me
> > as though I were a n00b that just fell off a cabbage truck. My
>
> Projecting much?
>
> I raised those concerns because I have seen them and suffered their
> consequences. There is no place for hubris: analogous changes were also
> performed by equally skilled and experienced Emacs developers, only for
> issues to be uncovered years in the future. (For example, when a call
> to `with-temp-buffer' was introduced to loadup.)
>
> How many times must we suffer the consequences of indiscriminate
> refactoring before we will recognize the obvious conclusion that
> code which doesn't need to change, shouldn't?
At one time I proposed having a basic Emacs version that would not
need changes. Meaning, no recognized bugs, but no new features added
either. An Emacs project that is complete, with no more changes made,
at a level that is manageable for one person.
> > earliest contributions to Emacs were done so long ago that they
> > predated the systematic Changelog convention; have you heard the
> > expression "teaching your grandmother to suck eggs"? My patience for
> > that sort of thing is limited.
>
> If that is the attitude by which you treat other Emacs developers, then
> from my POV this debate is over. We cannot work with you, when you
> dismiss real-world concerns that have been seen innumerable times in
> practice, based on a conceited view of your own skill.
>
> Which, BTW, has already broken the build once. And the jury is still
> out on whether your earlier change needs to be reverted, since Andrea
> has yet to ascertain if it will lead to negative consequences for native
> compilation.
>
>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 1:19 ` Eric S. Raymond
` (2 preceding siblings ...)
2023-08-10 2:28 ` Po Lu
@ 2023-08-10 7:44 ` Eli Zaretskii
2023-08-10 21:54 ` Emanuel Berg
2023-08-10 23:49 ` Shrinking the C core Eric S. Raymond
2023-08-10 11:28 ` Dmitry Gutov
4 siblings, 2 replies; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-10 7:44 UTC (permalink / raw)
To: esr; +Cc: luangruo, emacs-devel
> Date: Wed, 9 Aug 2023 21:19:11 -0400
> From: "Eric S. Raymond" <esr@thyrsus.com>
> Cc: emacs-devel@gnu.org
>
> Po Lu <luangruo@yahoo.com>:
> > "Eric S. Raymond" <esr@thyrsus.com> writes:
> >
> > > When I first worked on Emacs code in the 1980s Lisp was already fast
> > > enough, and machine speeds have gone up by something like 10^3 since.
> > > I plain don't believe the "slower" part can be an issue on modern
> > > hardware, not even on tiny SBCs.
> >
> > Can you promise the same, if your changes are not restricted to one or
> > two functions in fileio.c, but instead pervade throughout C source?
>
> Yes, in fact, I can. Because if by some miracle we were able to
> instantly rewrite the entirety of Emacs in Python (which I'm not
> advocating, I chose it because it's the slowest of the major modern
> scripting languages) basic considerations of clocks per second would
> predict it to run a *dead minimum* of two orders of magnitude faster
> than the Emacs of, say, 1990.
>
> And 1990 Emacs was already way fast enough for the human eye and
> brain, which can't even register interface lag of less than 0.17
> seconds (look up the story of Jef Raskin and how he exploited this
> psychophysical fact in the design of the Canon Cat sometime; it's very
> instructive). The human auditory system can perceive finer timeslices,
> down to about 0.02s in skilled musicians, but we're not using elisp
> for audio signal processing.
This kind of argument is inherently flawed: it's true that today's
machines are much faster than those in, say, 1990, but Emacs nowadays
demands much more horsepower from the CPU than it did back then.
What's more, Emacs is still a single-threaded Lisp machine, although
in the last 10 years CPU power has developed more and more in the
direction of multiple cores and execution units, with single execution
units being basically as fast (or as slow) today as they were a decade ago.
And if these theoretical arguments don't convince you, then there are
facts: the Emacs display engine, for example, was completely rewritten
since the 1990s, and is significantly more expensive than the old one
(because it lifts several of the gravest limitations of the old
redisplay). Similarly with some other core parts and internals.
We are trying to make Lisp programs faster all the time, precisely
because users do complain about annoying delays and slowness. Various
optimizations in the byte-compiler and the whole native-compilation
feature are part of this effort, and are further evidence that the
performance concerns are not illusory in Emacs. And we are still not
there yet: people still do complain from time to time, and not always
because someone selected a sub-optimal algorithm where better ones
exist.
The slowdown caused by moving one primitive to Lisp might be
insignificant, but these slowdowns add up and eventually do show in
user-experience reports. Rewriting code in Lisp also increases the GC
pressure, and GC cycles are known as one of the significant causes of
slow performance in quite a few cases. We are currently tracking the
GC performance (see the emacs-gc-stats@gnu.org mailing list) for that
reason, in the hope that we can modify GC and/or its thresholds to
improve performance.
> If you take away nothing else from this conversation, at least get it
> through your head that "more Lisp might make Emacs too slow" is a
> deeply, *deeply* silly idea. It's 2023 and the only ways you can make
> a user-facing program slow enough for response lag to be noticeable
> are disk latency on spinning rust, network round-trips, or operations
> with a superlinear big-O in critical paths. Mere interpretive overhead
> won't do it.
We found this conclusion to be false in practice, at least in Emacs
practice.
> > Finally, you haven't addressed the remainder of the reasons I itemized.
>
> They were too obvious, describing problems that competent software
> engineers know how to prevent or hedge against, and you addressed me
> as though I were a n00b that just fell off a cabbage truck. My
> earliest contributions to Emacs were done so long ago that they
> predated the systematic Changelog convention; have you heard the
> expression "teaching your grandmother to suck eggs"? My patience for
> that sort of thing is limited.
Please be more patient, and please consider what others here say to be
mostly in good faith and based on non-trivial experience. If
something in what others here say sounds like an offense to your
intelligence, it is most probably a misunderstanding: for most people
here English is not their first language, so don't expect them to
always be able to find the best words to express what they want to
say.
^ permalink raw reply [flat|nested] 247+ messages in thread
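The GC pressure Eli mentions can be observed directly from a running session, using built-in variables (a small sketch; any numbers it reports are session-specific):

```elisp
;; Sketch: observing garbage-collection pressure in a live session
;; via built-in counters.
(list :gc-runs gcs-done              ; collections so far
      :gc-seconds gc-elapsed         ; total time spent in GC
      :threshold gc-cons-threshold)  ; bytes allocated between GCs

;; Forcing a collection returns per-type usage data,
;; e.g. entries like (conses SIZE USED FREE):
(garbage-collect)
```

Code moved from C to Lisp tends to allocate more, which shows up as `gcs-done' and `gc-elapsed' climbing faster; that is the kind of data the emacs-gc-stats effort collects.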
* Re: Shrinking the C core
2023-08-10 7:44 ` Eli Zaretskii
@ 2023-08-10 21:54 ` Emanuel Berg
2023-08-11 10:27 ` Bignum performance (was: Shrinking the C core) Ihor Radchenko
2023-08-10 23:49 ` Shrinking the C core Eric S. Raymond
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-10 21:54 UTC (permalink / raw)
To: emacs-devel
Eli Zaretskii wrote:
> We are trying to make Lisp programs faster all the time,
> precisely because users do complain about annoying delays
> and slowness. Various optimizations in the byte-compiler and
> the whole native-compilation feature are parts of this
> effort
It's very fast with that; we should encourage more people to
use native-compilation.
>> If you take away nothing else from this conversation, at least get it
>> through your head that "more Lisp might make Emacs too slow" is a
>> deeply, *deeply* silly idea. It's 2023 and the only ways you can make
>> a user-facing program slow enough for response lag to be noticeable
>> are disk latency on spinning rust, network round-trips, or operations
>> with a superlinear big-O in critical paths. Mere interpretive overhead
>> won't do it.
>
> We found this conclusion to be false in practice, at least in Emacs
> practice.
In theory Lisp can be as fast as any other language but in
practice it is not the case with Elisp and Emacs at least.
Here is an experiment with stats on how Emacs/Elisp compares
to SBCL/CL; for this particular one it shows that Elisp, even
natively compiled, is still +78875% slower than Common Lisp.
;;; -*- lexical-binding: t -*-
;;
;; this file:
;; https://dataswamp.org/~incal/emacs-init/fib.el
;;
;; the CL:
;; https://dataswamp.org/~incal/cl/fib.cl
;;
;; code from:
;; elisp-benchmarks-1.14
;;
;; commands: [results]
;; $ emacs -Q -batch -l fib.el [8.660 s]
;; $ emacs -Q -batch -l fib.elc [3.386 s]
;; $ emacs -Q -batch -l fib-54a44480-bad305eb.eln [3.159 s]
;; $ sbcl -l fib.cl [0.004 s]
;;
;; (stats)
;; plain -> byte: +156%
;; plain -> native: +174%
;; plain -> sbcl: +216400%
;;
;; byte -> native: +7%
;; byte -> sbcl: +84550%
;;
;; native -> sbcl: +78875%
(require 'cl-lib)

(defun compare-table (l)
  (cl-loop for (ni ti) in l
           with first = t
           do (setq first t)
              (cl-loop for (nj tj) in l
                       do (when first
                            (insert "\n")
                            (setq first nil))
                          (unless (string= ni nj)
                            (let ((imp (* (- (/ ti tj) 1.0) 100)))
                              (when (< 0 imp)
                                (insert
                                 (format ";; %s -> %s: %+.0f%%\n" ni nj imp) )))))))

(defun stats ()
  (let ((p '("plain" 8.660))
        (b '("byte" 3.386))
        (n '("native" 3.159))
        (s '("sbcl" 0.004)) )
    (compare-table (list p b n s)) ))

(defun fib (reps num)
  (let ((z 0))
    (dotimes (_ reps)
      (let ((p1 1)
            (p2 1))
        (dotimes (_ (- num 2))
          (setf z (+ p1 p2)
                p2 p1
                p1 z))))
    z))

(let ((beg (float-time)))
  (fib 10000 1000)
  (message "%.3f s" (- (float-time) beg)) )

;; (shell-command "emacs -Q -batch -l \"~/.emacs.d/emacs-init/fib.el\"")
;; (shell-command "emacs -Q -batch -l \"~/.emacs.d/emacs-init/fib.elc\"")
;; (shell-command "emacs -Q -batch -l \"~/.emacs.d/eln-cache/30.0.50-3b889b4a/fib-54a44480-8bbda87b.eln\"")

(provide 'fib)
--
underground experts united
https://dataswamp.org/~incal
* Bignum performance (was: Shrinking the C core)
2023-08-10 21:54 ` Emanuel Berg
@ 2023-08-11 10:27 ` Ihor Radchenko
2023-08-11 12:10 ` Emanuel Berg
2023-08-11 14:14 ` Mattias Engdegård
0 siblings, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-11 10:27 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> In theory Lisp can be as fast as any other language but in
> practice it is not the case with Elisp and Emacs at least.
>
> Here is an experiment with stats on how Emacs/Elisp compares
> to SBCL/CL; for this particular one it shows that Elisp, even
> natively compiled, is still +78875% slower than Common Lisp.
>
> ...
> (defun fib (reps num)
> (let ((z 0))
> (dotimes (_ reps)
> (let ((p1 1)
> (p2 1))
> (dotimes (_ (- num 2))
> (setf z (+ p1 p2)
> p2 p1
> p1 z))))
> z))
>
> (let ((beg (float-time)))
> (fib 10000 1000)
> (message "%.3f s" (- (float-time) beg)) )
Most of the time is spent in (1) GC; (2) Creating bigint:
perf record emacs -Q -batch -l /tmp/fib.eln
perf report:
Creating bignums:
40.95% emacs emacs [.] allocate_vectorlike
GC:
20.21% emacs emacs [.] process_mark_stack
3.41% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
GC:
3.21% emacs emacs [.] mark_char_table
2.82% emacs emacs [.] pdumper_marked_p_impl
2.23% emacs libc.so.6 [.] 0x0000000000090076
1.78% emacs libgmp.so.10.5.0 [.] __gmpz_add
1.71% emacs emacs [.] pdumper_set_marked_impl
1.59% emacs emacs [.] arith_driver
1.31% emacs libc.so.6 [.] malloc
GC:
1.15% emacs emacs [.] sweep_vectors
1.03% emacs libgmp.so.10.5.0 [.] __gmpn_add_n_coreisbr
0.88% emacs libc.so.6 [.] cfree
0.87% emacs fib.eln [.] F666962_fib_0
0.85% emacs emacs [.] check_number_coerce_marker
0.80% emacs libc.so.6 [.] 0x0000000000091043
0.74% emacs emacs [.] allocate_pseudovector
0.65% emacs emacs [.] Flss
0.57% emacs libgmp.so.10.5.0 [.] __gmpz_realloc
0.56% emacs emacs [.] make_bignum_bits
My conclusion from this is that the bignum implementation is not
optimal, mostly because it does not reuse existing bignum objects and
always creates new ones, every single time we perform an arithmetic
operation.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Bignum performance (was: Shrinking the C core)
2023-08-11 10:27 ` Bignum performance (was: Shrinking the C core) Ihor Radchenko
@ 2023-08-11 12:10 ` Emanuel Berg
2023-08-11 12:32 ` Ihor Radchenko
2023-08-11 14:14 ` Mattias Engdegård
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 12:10 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> In theory Lisp can be as fast as any other language but in
>> practice it is not the case with Elisp and Emacs at least.
>>
>> Here is an experiment with stats on how Emacs/Elisp compares
>> to SBCL/CL; for this particular one it shows that Elisp,
>> even natively compiled, is still +78875% slower than
>> Common Lisp.
>>
>> ...
>> (defun fib (reps num)
>> (let ((z 0))
>> (dotimes (_ reps)
>> (let ((p1 1)
>> (p2 1))
>> (dotimes (_ (- num 2))
>> (setf z (+ p1 p2)
>> p2 p1
>> p1 z))))
>> z))
>>
>> (let ((beg (float-time)))
>> (fib 10000 1000)
>> (message "%.3f s" (- (float-time) beg)) )
>
> Most of the time is spent in (1) GC; (2) Creating bigint:
>
> perf record emacs -Q -batch -l /tmp/fib.eln
>
> perf report:
>
> Creating bignums:
> 40.95% emacs emacs [.] allocate_vectorlike
> GC:
> 20.21% emacs emacs [.] process_mark_stack
> 3.41% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
> GC:
> 3.21% emacs emacs [.] mark_char_table
> 2.82% emacs emacs [.] pdumper_marked_p_impl
> 2.23% emacs libc.so.6 [.] 0x0000000000090076
> 1.78% emacs libgmp.so.10.5.0 [.] __gmpz_add
> 1.71% emacs emacs [.] pdumper_set_marked_impl
> 1.59% emacs emacs [.] arith_driver
> 1.31% emacs libc.so.6 [.] malloc
> GC:
> 1.15% emacs emacs [.] sweep_vectors
> 1.03% emacs libgmp.so.10.5.0 [.] __gmpn_add_n_coreisbr
> 0.88% emacs libc.so.6 [.] cfree
> 0.87% emacs fib.eln [.] F666962_fib_0
> 0.85% emacs emacs [.] check_number_coerce_marker
> 0.80% emacs libc.so.6 [.] 0x0000000000091043
> 0.74% emacs emacs [.] allocate_pseudovector
> 0.65% emacs emacs [.] Flss
> 0.57% emacs libgmp.so.10.5.0 [.] __gmpz_realloc
> 0.56% emacs emacs [.] make_bignum_bits
>
> My conclusion from this is that the bignum implementation is
> not optimal, mostly because it does not reuse existing
> bignum objects and always creates new ones, every single
> time we perform an arithmetic operation.
Okay, interesting, how can you see that from the above data?
So is this a problem with the compiler? Or some
associated library?
If so, I'll see if I can upgrade gcc to gcc 13 and see if that
improves it, maybe they already fixed it ...
--
underground experts united
https://dataswamp.org/~incal
* Re: Bignum performance (was: Shrinking the C core)
2023-08-11 12:10 ` Emanuel Berg
@ 2023-08-11 12:32 ` Ihor Radchenko
2023-08-11 12:38 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-11 12:32 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> perf record emacs -Q -batch -l /tmp/fib.eln
>>
>> perf report:
>>
>> Creating bignums:
>> 40.95% emacs emacs [.] allocate_vectorlike
>> GC:
>> 20.21% emacs emacs [.] process_mark_stack
>> ...
>> My conclusion from this is that the bignum implementation is
>> not optimal, mostly because it does not reuse existing
>> bignum objects and always creates new ones, every single
>> time we perform an arithmetic operation.
>
> Okay, interesting, how can you see that from the above data?
process_mark_stack is the GC routine. And I see no other reason to call
allocate_vectorlike so much except allocating new bignum objects (which
are vectorlike; see src/lisp.h:pvec_type and src/bignum.h:Lisp_Bignum).
> So is this a problem with the compiler? Or some
> associated library?
The GC part is the well-known problem of the garbage collector being
slow when we allocate a large number of objects.
And the fact that we allocate many objects is related to the
immutability of bignums. Every time we do (setq bignum (* bignum
fixint)), we abandon the old object holding BIGNUM's value and allocate
a new bignum object with the new value. Clearly, this allocation is not
free and takes a lot of CPU time, while the computation itself is fast.
Maybe we could somehow re-use the already allocated bignum objects,
similar to what is done for cons cells (see src/alloc.c:Fcons).
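For illustration, the cons-cell reuse that Fcons relies on (freed cells are pushed onto a free list and popped on the next allocation, instead of going back to malloc) can be sketched in C. Everything here — struct cell, cell_alloc, cell_free — is a hypothetical stand-in for illustration only, not the actual Emacs internals:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical cell type; the real Lisp_Cons lives in src/lisp.h.  */
struct cell { struct cell *next; long car, cdr; };

/* Freed cells waiting to be reused, as in the cons free list in
   src/alloc.c.  */
static struct cell *free_list = NULL;

static struct cell *
cell_alloc (void)
{
  if (free_list)
    {
      /* Pop a previously freed cell instead of calling malloc.  */
      struct cell *c = free_list;
      free_list = c->next;
      return c;
    }
  return malloc (sizeof (struct cell));
}

static void
cell_free (struct cell *c)
{
  /* Push onto the free list; the memory never goes back to malloc.  */
  c->next = free_list;
  free_list = c;
}
```

Reusing a dead object turns most allocations into a pointer pop, which is why the same trick is attractive for churn-heavy loops like the bignum benchmark.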
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Bignum performance (was: Shrinking the C core)
2023-08-11 12:32 ` Ihor Radchenko
@ 2023-08-11 12:38 ` Emanuel Berg
2023-08-11 14:07 ` [PATCH] " Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 12:38 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
> And the fact that we allocate many objects is related to the
> immutability of bignums. Every time we do (setq bignum (*
> bignum fixint)), we abandon the old object holding BIGNUM's
> value and allocate a new bignum object with the new value.
> Clearly, this allocation is not free and takes a lot of CPU
> time, while the computation itself is fast.
So this happens in Emacs C code, OK.
> Maybe we could somehow re-use the already allocated bignum
> objects, similar to what is done for cons cells (see
> src/alloc.c:Fcons).
Sounds reasonable :)
--
underground experts united
https://dataswamp.org/~incal
* [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-11 12:38 ` Emanuel Berg
@ 2023-08-11 14:07 ` Ihor Radchenko
2023-08-11 18:06 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-11 14:07 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> Maybe we could somehow re-use the already allocated bignum
>> objects, similar to what is done for cons cells (see
>> src/alloc.c:Fcons).
>
> Sounds reasonable :)
And... it has already been done, actually.
allocate_vectorlike calls allocate_vector_from_block, which re-uses
pre-allocated objects.
And looking into the call graph, this exact branch calling
allocate_vector_from_block is indeed called for the bignums:
33.05% 0.00% emacs [unknown] [.] 0000000000000000
|
---0
|
|--28.04%--allocate_vectorlike
| |
| --27.78%--allocate_vector_from_block (inlined)
| |
| |--2.13%--next_vector (inlined)
| |
| --0.74%--setup_on_free_list (inlined)
If I manually cut off `allocate_vector_from_block', the benchmark time
doubles. So, there is already some improvement coming from re-using
allocated memory.
I looked deeper into the code and tried to cut down on unnecessary
looping over the pre-allocated `vector_free_lists'. See the attached
patch.
Without the patch:
perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
2.321 s
28.60% emacs emacs [.] allocate_vectorlike
24.36% emacs emacs [.] process_mark_stack
3.76% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
3.59% emacs emacs [.] pdumper_marked_p_impl
3.53% emacs emacs [.] mark_char_table
With the patch:
perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
1.968 s
33.17% emacs emacs [.] process_mark_stack
5.51% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
5.05% emacs emacs [.] mark_char_table
4.88% emacs emacs [.] pdumper_marked_p_impl
3.30% emacs emacs [.] pdumper_set_marked_impl
...
2.52% emacs emacs [.] allocate_vectorlike
allocate_vectorlike clearly takes a lot less time by not trying to loop
over all the ~500 empty elements of vector_free_lists.
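The low-water-mark idea can be sketched outside Emacs as well. This is a simplified model under assumed names (buckets, min_idx, put_block, find_bucket); the real code operates on vector_free_lists in src/alloc.c:

```c
#include <assert.h>
#include <stddef.h>

#define NBUCKETS 512  /* stand-in for VECTOR_MAX_FREE_LIST_INDEX */

/* Hypothetical model of vector_free_lists: bucket i holds a free
   block of size class i, NULL when the bucket is empty.  */
static void *buckets[NBUCKETS];
static int min_idx = NBUCKETS;  /* first index that might be non-empty */

static void
put_block (int i, void *p)
{
  buckets[i] = p;
  if (i < min_idx)
    min_idx = i;  /* keep the low-water mark current on insert */
}

/* Return the first non-empty bucket with index >= WANT, or -1.
   The scan starts no earlier than the low-water mark, skipping the
   known-empty prefix, and pushes the mark past buckets seen empty.  */
static int
find_bucket (int want)
{
  int i = want > min_idx ? want : min_idx;
  for (; i < NBUCKETS; i++)
    {
      if (buckets[i])
        return i;
      if (i == min_idx)
        min_idx++;
    }
  return -1;
}
```

Because the mark only moves forward between sweeps, the cost of skipping the empty prefix is amortized away instead of being paid on every allocation.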
We can further get rid of the GC by temporarily disabling it (just for
demonstration):
(let ((beg (float-time)))
(setq gc-cons-threshold most-positive-fixnum)
(fib 10000 1000)
(message "%.3f s" (- (float-time) beg)) )
perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
0.739 s
17.11% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
7.35% emacs libgmp.so.10.5.0 [.] __gmpz_add
6.51% emacs emacs [.] arith_driver
6.03% emacs libc.so.6 [.] malloc
5.57% emacs emacs [.] allocate_vectorlike
5.20% emacs [unknown] [k] 0xffffffffaae01857
4.16% emacs libgmp.so.10.5.0 [.] __gmpn_add_n_coreisbr
3.72% emacs emacs [.] check_number_coerce_marker
3.35% emacs fib.eln [.] F666962_fib_0
3.29% emacs emacs [.] allocate_pseudovector
2.30% emacs emacs [.] Flss
Now, the actual bignum arithmetic (lisp/gmp.c) takes most of the CPU time.
I am not sure what differs between the Elisp GMP bindings and the
analogous SBCL bindings such that SBCL is so much faster.
[-- Attachment #2: allocate_vector_from_block.diff --]
[-- Type: text/x-patch, Size: 1712 bytes --]
diff --git a/src/alloc.c b/src/alloc.c
index 17ca5c725d0..62e96b4c9de 100644
--- a/src/alloc.c
+++ b/src/alloc.c
@@ -3140,6 +3140,7 @@ large_vector_vec (struct large_vector *p)
vectors of the same NBYTES size, so NTH == VINDEX (NBYTES). */
static struct Lisp_Vector *vector_free_lists[VECTOR_MAX_FREE_LIST_INDEX];
+static int vector_free_lists_min_idx = VECTOR_MAX_FREE_LIST_INDEX;
/* Singly-linked list of large vectors. */
@@ -3176,6 +3177,8 @@ setup_on_free_list (struct Lisp_Vector *v, ptrdiff_t nbytes)
set_next_vector (v, vector_free_lists[vindex]);
ASAN_POISON_VECTOR_CONTENTS (v, nbytes - header_size);
vector_free_lists[vindex] = v;
+ if ( vindex < vector_free_lists_min_idx )
+ vector_free_lists_min_idx = vindex;
}
/* Get a new vector block. */
@@ -3230,8 +3233,8 @@ allocate_vector_from_block (ptrdiff_t nbytes)
/* Next, check free lists containing larger vectors. Since
we will split the result, we should have remaining space
large enough to use for one-slot vector at least. */
- for (index = VINDEX (nbytes + VBLOCK_BYTES_MIN);
- index < VECTOR_MAX_FREE_LIST_INDEX; index++)
+ for (index = max ( VINDEX (nbytes + VBLOCK_BYTES_MIN), vector_free_lists_min_idx );
+ index < VECTOR_MAX_FREE_LIST_INDEX; index++, vector_free_lists_min_idx++)
if (vector_free_lists[index])
{
/* This vector is larger than requested. */
@@ -3413,6 +3416,7 @@ sweep_vectors (void)
gcstat.total_vectors = 0;
gcstat.total_vector_slots = gcstat.total_free_vector_slots = 0;
memset (vector_free_lists, 0, sizeof (vector_free_lists));
+ vector_free_lists_min_idx = VECTOR_MAX_FREE_LIST_INDEX;
/* Looking through vector blocks. */
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-11 14:07 ` [PATCH] " Ihor Radchenko
@ 2023-08-11 18:06 ` Emanuel Berg
2023-08-11 19:41 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 18:06 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>>> Maybe we could somehow re-use the already allocated bignum
>>> objects, similar to what is done for cons cells (see
>>> src/alloc.c:Fcons).
>>
>> Sounds reasonable :)
>
> And... it has already been done, actually.
> allocate_vectorlike calls allocate_vector_from_block, which
> re-uses pre-allocated objects.
>
> And looking into the call graph, this exact branch calling
> allocate_vector_from_block is indeed called for the bignums [...]
Are we talking about a list of Emacs C functions, with the
corresponding time each has spent executing, in a tree data
structure? :O
E.g. where do we find allocate_vectorlike?
> With the patch:
>
> perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
> 1.968 s
>
> 33.17% emacs emacs [.] process_mark_stack
> 5.51% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
> 5.05% emacs emacs [.] mark_char_table
> 4.88% emacs emacs [.] pdumper_marked_p_impl
> 3.30% emacs emacs [.] pdumper_set_marked_impl
> ...
> 2.52% emacs emacs [.] allocate_vectorlike
>
> allocate_vectorlike clearly takes a lot less time by not trying to loop
> over all the ~500 empty elements of vector_free_lists.
>
> We can further get rid of the GC by temporarily disabling it (just for
> demonstration):
>
> (let ((beg (float-time)))
> (setq gc-cons-threshold most-positive-fixnum)
> (fib 10000 1000)
> (message "%.3f s" (- (float-time) beg)) )
>
> perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
> 0.739 s
>
> 17.11% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
> 7.35% emacs libgmp.so.10.5.0 [.] __gmpz_add
> 6.51% emacs emacs [.] arith_driver
> 6.03% emacs libc.so.6 [.] malloc
> 5.57% emacs emacs [.] allocate_vectorlike
> 5.20% emacs [unknown] [k] 0xffffffffaae01857
> 4.16% emacs libgmp.so.10.5.0 [.] __gmpn_add_n_coreisbr
> 3.72% emacs emacs [.] check_number_coerce_marker
> 3.35% emacs fib.eln [.] F666962_fib_0
> 3.29% emacs emacs [.] allocate_pseudovector
> 2.30% emacs emacs [.] Flss
>
> Now, the actual bignum arithmetic (lisp/gmp.c) takes most of the CPU time.
>
> I am not sure what differs between Elisp gmp bindings and analogous SBCL
> binding so that SBCL is so much faster.
>
> diff --git a/src/alloc.c b/src/alloc.c
> index 17ca5c725d0..62e96b4c9de 100644
> --- a/src/alloc.c
> +++ b/src/alloc.c
> @@ -3140,6 +3140,7 @@ large_vector_vec (struct large_vector *p)
> vectors of the same NBYTES size, so NTH == VINDEX (NBYTES). */
>
> static struct Lisp_Vector *vector_free_lists[VECTOR_MAX_FREE_LIST_INDEX];
> +static int vector_free_lists_min_idx = VECTOR_MAX_FREE_LIST_INDEX;
>
> /* Singly-linked list of large vectors. */
>
> @@ -3176,6 +3177,8 @@ setup_on_free_list (struct Lisp_Vector *v, ptrdiff_t nbytes)
> set_next_vector (v, vector_free_lists[vindex]);
> ASAN_POISON_VECTOR_CONTENTS (v, nbytes - header_size);
> vector_free_lists[vindex] = v;
> + if ( vindex < vector_free_lists_min_idx )
> + vector_free_lists_min_idx = vindex;
> }
>
> /* Get a new vector block. */
> @@ -3230,8 +3233,8 @@ allocate_vector_from_block (ptrdiff_t nbytes)
> /* Next, check free lists containing larger vectors. Since
> we will split the result, we should have remaining space
> large enough to use for one-slot vector at least. */
> - for (index = VINDEX (nbytes + VBLOCK_BYTES_MIN);
> - index < VECTOR_MAX_FREE_LIST_INDEX; index++)
> + for (index = max ( VINDEX (nbytes + VBLOCK_BYTES_MIN), vector_free_lists_min_idx );
> + index < VECTOR_MAX_FREE_LIST_INDEX; index++, vector_free_lists_min_idx++)
> if (vector_free_lists[index])
> {
> /* This vector is larger than requested. */
> @@ -3413,6 +3416,7 @@ sweep_vectors (void)
> gcstat.total_vectors = 0;
> gcstat.total_vector_slots = gcstat.total_free_vector_slots = 0;
> memset (vector_free_lists, 0, sizeof (vector_free_lists));
> + vector_free_lists_min_idx = VECTOR_MAX_FREE_LIST_INDEX;
>
> /* Looking through vector blocks. */
Amazing! :O
See if you can do my original test, which was 1-3: plain Elisp,
byte-compiled Elisp, and natively compiled Elisp, plus the
Common Lisp execution (on your computer), if you'd like.
Actually it is a bit of a bummer to the community since Emacs
is like THE portal into Lisp. We should have the best Lisp in
the business, and I don't see why not? Emacs + SBCL + CL +
Elisp anyone?
I.e. real CL, not the cl- library, which is actually written in Elisp.
Not that there is anything wrong with that! On the contrary ;)
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-11 18:06 ` Emanuel Berg
@ 2023-08-11 19:41 ` Ihor Radchenko
2023-08-11 19:50 ` Emanuel Berg
2023-08-11 22:46 ` Emanuel Berg
0 siblings, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-11 19:41 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> And... is has been already done, actually.
>> allocate_vectorlike calls allocate_vector_from_block, which
>> re-uses pre-allocated objects.
>>
>> And looking into the call graph, this exact branch calling
>> allocate_vector_from_block is indeed called for the bignums [...]
>
> Are we talking a list of Emacs C functions executing with the
> corresponding times they have been in execution in a tree data
> structure? :O
That's what perf does; it is a sampling profiler for GNU/Linux.
The Elisp equivalent is profiler.el, but it does not reveal underlying C
functions.
> E.g. where do we find allocate_vectorlike ?
I have listed the commands I used (from terminal):
1. perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
<records CPU stats while running emacs>
2. perf report
<displays the stats>
You need Emacs compiled with debug symbols to get meaningful output.
See more at https://www.brendangregg.com/perf.html
> See if you can do my original test, which was 1-3 Elisp,
> byte-compiled Elisp, and natively compiled Elisp, and the
> Common Lisp execution (on your computer), if you'd like.
As you wish:
$ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.el [5.783 s]
$ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.elc [1.961 s]
$ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln [1.901 s]
$ SBCL_HOME=/usr/lib64/sbcl sbcl --load /tmp/fib.cl [0.007 s]
without the patch (on my system)
$ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.el [6.546 s]
$ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.elc [2.498 s]
$ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln [2.518 s]
Also, the patch gives improvements for more than just bignums.
I ran elisp-benchmarks
(https://elpa.gnu.org/packages/elisp-benchmarks.html) and got
(before the patch)
| test | non-gc avg (s) | gc avg (s) | gcs avg | tot avg (s) | tot avg err (s) |
|--------------------+----------------+------------+---------+-------------+-----------------|
| bubble | 0.70 | 0.06 | 1 | 0.76 | 0.07 |
| bubble-no-cons | 1.17 | 0.00 | 0 | 1.17 | 0.02 |
| bytecomp | 1.74 | 0.29 | 13 | 2.03 | 0.12 |
| dhrystone | 2.30 | 0.00 | 0 | 2.30 | 0.07 |
| eieio | 1.25 | 0.13 | 7 | 1.38 | 0.03 |
| fibn | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-named-let | 1.53 | 0.00 | 0 | 1.53 | 0.03 |
| fibn-rec | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-tc | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| flet | 1.48 | 0.00 | 0 | 1.48 | 0.04 |
| inclist | 1.07 | 0.00 | 0 | 1.07 | 0.02 |
| inclist-type-hints | 1.00 | 0.00 | 0 | 1.00 | 0.07 |
| listlen-tc | 0.13 | 0.00 | 0 | 0.13 | 0.03 |
| map-closure | 5.26 | 0.00 | 0 | 5.26 | 0.09 |
| nbody | 1.61 | 0.17 | 1 | 1.78 | 0.06 |
| pack-unpack | 0.31 | 0.02 | 1 | 0.33 | 0.00 |
| pack-unpack-old | 0.50 | 0.05 | 3 | 0.55 | 0.02 |
| pcase | 1.85 | 0.00 | 0 | 1.85 | 0.05 |
| pidigits | 4.41 | 0.96 | 17 | 5.37 | 0.13 |
| scroll | 0.64 | 0.00 | 0 | 0.64 | 0.01 |
| smie | 1.59 | 0.04 | 2 | 1.63 | 0.03 |
|--------------------+----------------+------------+---------+-------------+-----------------|
| total | 28.54 | 1.72 | 45 | 30.26 | 0.26 |
(after the patch)
| test | non-gc avg (s) | gc avg (s) | gcs avg | tot avg (s) | tot avg err (s) |
|--------------------+----------------+------------+---------+-------------+-----------------|
| bubble | 0.68 | 0.05 | 1 | 0.73 | 0.04 |
| bubble-no-cons | 1.00 | 0.00 | 0 | 1.00 | 0.04 |
| bytecomp | 1.60 | 0.23 | 13 | 1.82 | 0.16 |
| dhrystone | 2.03 | 0.00 | 0 | 2.03 | 0.05 |
| eieio | 1.08 | 0.12 | 7 | 1.20 | 0.07 |
| fibn | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-named-let | 1.44 | 0.00 | 0 | 1.44 | 0.12 |
| fibn-rec | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-tc | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| flet | 1.36 | 0.00 | 0 | 1.36 | 0.09 |
| inclist | 1.00 | 0.00 | 0 | 1.00 | 0.00 |
| inclist-type-hints | 1.00 | 0.00 | 0 | 1.00 | 0.07 |
| listlen-tc | 0.11 | 0.00 | 0 | 0.11 | 0.02 |
| map-closure | 4.91 | 0.00 | 0 | 4.91 | 0.12 |
| nbody | 1.47 | 0.17 | 1 | 1.64 | 0.08 |
| pack-unpack | 0.29 | 0.02 | 1 | 0.31 | 0.01 |
| pack-unpack-old | 0.43 | 0.05 | 3 | 0.48 | 0.01 |
| pcase | 1.84 | 0.00 | 0 | 1.84 | 0.07 |
| pidigits | 3.16 | 0.94 | 17 | 4.11 | 0.10 |
| scroll | 0.58 | 0.00 | 0 | 0.58 | 0.00 |
| smie | 1.40 | 0.04 | 2 | 1.44 | 0.06 |
|--------------------+----------------+------------+---------+-------------+-----------------|
| total | 25.38 | 1.62 | 45 | 27.00 | 0.32 |
About ~10% improvement, with each individual benchmark being faster.
Note how the fibn test takes 0.00 seconds. It is limited to the fixnum range.
> Actually it is a bit of a bummer to the community since Emacs
> is like THE portal into Lisp. We should have the best Lisp in
> the business, and I don't see why not? Emacs + SBCL + CL +
> Elisp anyone?
This is a balancing act. Elisp is tailored for Emacs as an editor, so
trade-offs are inevitable. I am skeptical about Elisp outperforming CL,
but that does not mean that we should not try to improve things.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-11 19:41 ` Ihor Radchenko
@ 2023-08-11 19:50 ` Emanuel Berg
2023-08-12 8:24 ` Ihor Radchenko
2023-08-11 22:46 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 19:50 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> See if you can do my original test, which was 1-3 Elisp,
>> byte-compiled Elisp, and natively compiled Elisp, and the
>> Common Lisp execution (on your computer), if you'd like.
>
> As you wish:
>
> $ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.el [5.783 s]
> $ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.elc [1.961 s]
> $ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln [1.901 s]
> $ SBCL_HOME=/usr/lib64/sbcl sbcl --load /tmp/fib.cl [0.007 s]
>
> without the patch (on my system)
>
> $ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.el [6.546 s]
> $ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.elc [2.498 s]
> $ ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln [2.518 s]
The stats seem to speak one language ...
> Also, the patch gives improvements for more than just
> bignums
> | test | non-gc avg (s) | gc avg (s) | gcs avg | tot avg (s) | tot avg err (s) |
> |--------------------+----------------+------------+---------+-------------+-----------------|
> | bubble | 0.70 | 0.06 | 1 | 0.76 | 0.07 |
> | bubble-no-cons | 1.17 | 0.00 | 0 | 1.17 | 0.02 |
> | bytecomp | 1.74 | 0.29 | 13 | 2.03 | 0.12 |
> | dhrystone | 2.30 | 0.00 | 0 | 2.30 | 0.07 |
> | eieio | 1.25 | 0.13 | 7 | 1.38 | 0.03 |
> | fibn | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
> | fibn-named-let | 1.53 | 0.00 | 0 | 1.53 | 0.03 |
> | fibn-rec | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
> | fibn-tc | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
> | flet | 1.48 | 0.00 | 0 | 1.48 | 0.04 |
> | inclist | 1.07 | 0.00 | 0 | 1.07 | 0.02 |
> | inclist-type-hints | 1.00 | 0.00 | 0 | 1.00 | 0.07 |
> | listlen-tc | 0.13 | 0.00 | 0 | 0.13 | 0.03 |
> | map-closure | 5.26 | 0.00 | 0 | 5.26 | 0.09 |
> | nbody | 1.61 | 0.17 | 1 | 1.78 | 0.06 |
> | pack-unpack | 0.31 | 0.02 | 1 | 0.33 | 0.00 |
> | pack-unpack-old | 0.50 | 0.05 | 3 | 0.55 | 0.02 |
> | pcase | 1.85 | 0.00 | 0 | 1.85 | 0.05 |
> | pidigits | 4.41 | 0.96 | 17 | 5.37 | 0.13 |
> | scroll | 0.64 | 0.00 | 0 | 0.64 | 0.01 |
> | smie | 1.59 | 0.04 | 2 | 1.63 | 0.03 |
> |--------------------+----------------+------------+---------+-------------+-----------------|
> | total | 28.54 | 1.72 | 45 | 30.26 | 0.26 |
>
> (after the patch)
> | test | non-gc avg (s) | gc avg (s) | gcs avg | tot avg (s) | tot avg err (s) |
> |--------------------+----------------+------------+---------+-------------+-----------------|
> | bubble | 0.68 | 0.05 | 1 | 0.73 | 0.04 |
> | bubble-no-cons | 1.00 | 0.00 | 0 | 1.00 | 0.04 |
> | bytecomp | 1.60 | 0.23 | 13 | 1.82 | 0.16 |
> | dhrystone | 2.03 | 0.00 | 0 | 2.03 | 0.05 |
> | eieio | 1.08 | 0.12 | 7 | 1.20 | 0.07 |
> | fibn | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
> | fibn-named-let | 1.44 | 0.00 | 0 | 1.44 | 0.12 |
> | fibn-rec | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
> | fibn-tc | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
> | flet | 1.36 | 0.00 | 0 | 1.36 | 0.09 |
> | inclist | 1.00 | 0.00 | 0 | 1.00 | 0.00 |
> | inclist-type-hints | 1.00 | 0.00 | 0 | 1.00 | 0.07 |
> | listlen-tc | 0.11 | 0.00 | 0 | 0.11 | 0.02 |
> | map-closure | 4.91 | 0.00 | 0 | 4.91 | 0.12 |
> | nbody | 1.47 | 0.17 | 1 | 1.64 | 0.08 |
> | pack-unpack | 0.29 | 0.02 | 1 | 0.31 | 0.01 |
> | pack-unpack-old | 0.43 | 0.05 | 3 | 0.48 | 0.01 |
> | pcase | 1.84 | 0.00 | 0 | 1.84 | 0.07 |
> | pidigits | 3.16 | 0.94 | 17 | 4.11 | 0.10 |
> | scroll | 0.58 | 0.00 | 0 | 0.58 | 0.00 |
> | smie | 1.40 | 0.04 | 2 | 1.44 | 0.06 |
> |--------------------+----------------+------------+---------+-------------+-----------------|
> | total | 25.38 | 1.62 | 45 | 27.00 | 0.32 |
>
> About ~10% improvement, with each individual benchmark being faster.
Ten percent? We take it :)
> Note how fibn test takes 0.00 seconds. It is limited to
> fixnum range.
What does that say/mean?
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-11 19:50 ` Emanuel Berg
@ 2023-08-12 8:24 ` Ihor Radchenko
2023-08-12 16:03 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-12 8:24 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> Note how fibn test takes 0.00 seconds. It is limited to
>> fixnum range.
>
> What does that say/mean?
It shows that normal int operations are much, much faster than bigint operations.
So, your benchmark is rather esoteric if we consider normal usage patterns.
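For contrast, here is the fixnum-range flavor of the same loop in plain C: with native 64-bit integers every step is a couple of machine instructions and allocates nothing. Note that uint64_t overflows above fib(93), so this only models the fixnum case, not the bignum benchmark under discussion:

```c
#include <assert.h>
#include <stdint.h>

/* Same loop shape as the fib Elisp benchmark: REPS repetitions of an
   iterative Fibonacci up to fib(NUM).  Arithmetic wraps modulo 2^64,
   which is fine for timing purposes but wrong for num > 93.  */
static uint64_t
fib (unsigned reps, unsigned num)
{
  uint64_t z = 0;
  for (unsigned r = 0; r < reps; r++)
    {
      uint64_t p1 = 1, p2 = 1;
      /* (- num 2) iterations, like the dotimes in the Elisp version.  */
      for (unsigned i = 0; i + 2 < num; i++)
        {
          z = p1 + p2;
          p2 = p1;
          p1 = z;
        }
    }
  return z;
}
```

No boxing, no GC pressure: the entire state lives in three registers, which is roughly what SBCL compiles the fixnum-range loop down to as well.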
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-12 8:24 ` Ihor Radchenko
@ 2023-08-12 16:03 ` Emanuel Berg
2023-08-13 9:09 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 16:03 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>>> Note how fibn test takes 0.00 seconds. It is limited to
>>> fixnum range.
>>
>> What does that say/mean?
>
> It shows that normal int operations are much, much faster
> than bigint operations. So, your benchmark is rather esoteric if
> we consider normal usage patterns.
Didn't you provide a whole set of benchmarks with an
approximate gain of 10%? Maybe some esoteric nature is
built into the benchmark concept ...
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-12 16:03 ` Emanuel Berg
@ 2023-08-13 9:09 ` Ihor Radchenko
2023-08-13 9:49 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-13 9:09 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> It tells that normal int operations are much, much faster
>> compared to bigint. So, your benchmark is rather esoteric if
>> we consider normal usage patterns.
>
> Didn't you provide a whole set of benchmarks with an
> approximate gain of 10%? Maybe some esoteric nature is
> built into the benchmark concept ...
The main problem your benchmark demonstrated is with bignum.
By accident, it also revealed slight inefficiency in vector allocation,
but this inefficiency is nowhere near SBCL 0.007 sec vs. Elisp 2.5 sec.
In practice, as more generic benchmarks demonstrated, we only had 10%
performance hit. Not something to claim that Elisp is much slower
compared to CL.
It would be more useful to compare CL with Elisp using less specialized
benchmarks that do not involve bignums. As Mattias commented, we do not
care much about bignum performance in Elisp - it is a rarely used
feature; we are content that it simply works, even if not fast, and the
core contributors (at least, Mattias) are not seeing improving bignums
as their priority.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-13 9:09 ` Ihor Radchenko
@ 2023-08-13 9:49 ` Emanuel Berg
2023-08-13 10:21 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-13 9:49 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
> The main problem your benchmark demonstrated is with bignum.
> By accident, it also revealed slight inefficiency in vector
> allocation, but this inefficiency is nowhere near SBCL 0.007
> sec vs. Elisp 2.5 sec.
Yeah, we can't have that.
> In practice, as more generic benchmarks demonstrated, we
> only had 10% performance hit. Not something to claim that
> Elisp is much slower compared to CL.
What do you mean, generic +10% is a huge improvement.
> It would be more useful to compare CL with Elisp using less
> specialized benchmarks that do not involve bignums.
> As Mattias commented, we do not care much about bignum
> performance in Elisp - it is a rarely used feature; we are
> content that it simply works, even if not fast, and the core
> contributors (at least, Mattias) are not seeing improving
> bignums as their priority.
But didn't your patch do that already?
That would indicate that it is possible to do it all in
Elisp, which would be the best way to solve this problem _and_
avoid the integration, and maybe portability, issues
described ...
So 1, the first explanation why CL is much faster is another,
faster implementation of bignum handling in CL. If that has
already been solved here, there is absolutely no reason not to
include it, as 10% is a huge gain, even more so for a whole set
of benchmarks.
Instead of relying on a single benchmark, one should have
a set of benchmarks, and every benchmark should have a purpose.
This doesn't have to be so involved though; for example,
"bignums" could be the purpose of my benchmark. So one would
have several, say a dozen, each with the purpose of slowing the
computer down with respect to some aspect or known situation
that one would try to provoke ... These can be well-known
algorithms, for that matter.
One would then do the same thing in CL and see: where does CL
perform much better? The next question would be, why?
If it is just about piling up +10%, let's do it!
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-13 9:49 ` Emanuel Berg
@ 2023-08-13 10:21 ` Ihor Radchenko
2023-08-14 2:20 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-13 10:21 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> In practice, as more generic benchmarks demonstrated, we
>> only had 10% performance hit. Not something to claim that
>> Elisp is much slower compared to CL.
>
> What do you mean, generic +10% is a huge improvement.
It is, but it is also tangential to the comparison between Elisp and CL. The
main (AFAIU) difference between Elisp and CL is in how the bignums are
stored. Elisp uses its own internal object type while CL uses GMP's
native format. And we have huge overheads converting things
back-and-forth between GMP and Elisp formats. It is by choice. And my
patch did not do anything about this difference.
Also, +10% is just on my machine. We need someone else to test things
before jumping to far-reaching conclusions. I plan to submit the patch
in a less ad-hoc state later, as a separate ticket.
>> It would be more useful to compare CL with Elisp using less
>> specialized benchmarks that do not involve bignums.
>> As Mattias commented, we do not care much about bignum
>> performance in Elisp - it is a rarely used feature; we are
>> content that it simply works, even if not fast, and the core
>> contributors (at least, Mattias) are not seeing improving
>> bignums as their priority.
>
> But didn't your patch do that already?
No. The benchmark only compared between Elisp before/after the patch.
Not with CL.
> Instead of relying on a single benchmark, one should have
> a set of benchmarks, and every benchmark should have
> a purpose. This doesn't have to be so involved though; for
> example, "bignums" could be the purpose of my benchmark. So
> one would have several, say a dozen, each with the purpose
> of slowing the computer down with respect to some aspect or
> known situation that one would try to provoke ... These can
> be well-known algorithms, for that matter.
>
> One would then do the same thing in CL and see: where does
> CL perform much better? The next question would be, why?
Sure. Feel free to share such benchmark for Elisp vs. CL. I only know
the benchmark library for Elisp. No equivalent comparable benchmark for
CL.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-13 10:21 ` Ihor Radchenko
@ 2023-08-14 2:20 ` Emanuel Berg
2023-08-14 2:42 ` [PATCH] Re: Bignum performance Po Lu
2023-08-14 7:20 ` [PATCH] Re: Bignum performance (was: Shrinking the C core) Ihor Radchenko
0 siblings, 2 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-14 2:20 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>>> In practice, as more generic benchmarks demonstrated, we
>>> only had 10% performance hit. Not something to claim that
>>> Elisp is much slower compared to CL.
>>
>> What do you mean, generic +10% is a huge improvement.
>
> It is, but it is also tangential to the comparison between Elisp
> and CL. The main (AFAIU) difference between Elisp and CL is
> in how the bignums are stored. Elisp uses its own internal
> object type while CL uses GMP's native format.
GMP = GNU Multiple Precision Arithmetic Library.
https://en.wikipedia.org/wiki/GNU_Multiple_Precision_Arithmetic_Library
> And we have huge overheads converting things back-and-forth
> between GMP and Elisp formats. It is by choice. And my patch
> did not do anything about this difference.
But that's all the better, your patch solved (very likely) the
problem and did so without causing havoc by trying to forcibly
merge opposing solutions.
And the method was: instead of reallocating new objects for
bignums, we are now reusing existing allocations for new data?
>>> It would be more useful to compare CL with Elisp using
>>> less specialized benchmarks that do not involve bignums.
>>> As Mattias commented, we do not care much about bignum
>>> performance in Elisp - it is a rarely used feature; we are
>>> content that it simply works, even if not fast, and the
>>> core contributors (at least, Mattias) are not seeing
>>> improving bignums as their priority.
>>
>> But didn't your patch do that already?
>
> No. The benchmark only compared between Elisp before/after
> the patch. Not with CL.
No, that much I understood. It was Elisp before and after the
patch, as you say. Isn't before/after all the data you need?
Nah, it can be useful to have an external reference as well,
and here we are also hoping we can use the benchmarks to answer
the question whether CL is just so much faster in general, or
if there are certain areas where it excels - and if so - what
those areas are and what they contain, to unlock all
that speed.
>> Instead of relying on a single benchmark, one should have
>> a set of benchmarks, and every benchmark should have
>> a purpose. This doesn't have to be so involved though; for
>> example, "bignums" could be the purpose of my benchmark. So
>> one would have several, say a dozen, each with the purpose
>> of slowing the computer down with respect to some aspect or
>> known situation that one would try to provoke ... These can
>> be well-known algorithms, for that matter.
>>
>> One would then do the same thing in CL and see: where does
>> CL perform much better? The next question would be, why?
>
> Sure. Feel free to share such benchmark for Elisp vs. CL.
> I only know the benchmark library for Elisp. No equivalent
> comparable benchmark for CL.
I'm working on it! This will be very interesting, for sure.
The need for speed - but in a very methodical way ...
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance
2023-08-14 2:20 ` Emanuel Berg
@ 2023-08-14 2:42 ` Po Lu
2023-08-14 4:16 ` Emanuel Berg
2023-08-14 7:15 ` Ihor Radchenko
2023-08-14 7:20 ` [PATCH] Re: Bignum performance (was: Shrinking the C core) Ihor Radchenko
1 sibling, 2 replies; 247+ messages in thread
From: Po Lu @ 2023-08-14 2:42 UTC (permalink / raw)
To: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> Ihor Radchenko wrote:
>
>>>> In practice, as more generic benchmarks demonstrated, we
>>>> only had 10% performance hit. Not something to claim that
>>>> Elisp is much slower compared to CL.
>>>
>>> What do you mean, generic +10% is a huge improvement.
>>
>> It is, but it is also tangential to the comparison between Elisp
>> and CL. The main (AFAIU) difference between Elisp and CL is
>> in how the bignums are stored. Elisp uses its own internal
>> object type while CL uses GMP's native format.
>
> GMP = GNU Multiple Precision Arithmetic Library.
>
> https://en.wikipedia.org/wiki/GNU_Multiple_Precision_Arithmetic_Library
>
>> And we have huge overheads converting things back-and-forth
>> between GMP and Elisp formats. It is by choice. And my patch
>> did not do anything about this difference.
AFAIU, no conversion takes place between ``Elisp formats'' and GMP
formats. Our bignums rely on GMP for all data storage and memory
allocation.
struct Lisp_Bignum
{
  union vectorlike_header header;
  mpz_t value;                   /* <---- GMP type */
} GCALIGNED_STRUCT;
and finally:
INLINE mpz_t const *
bignum_val (struct Lisp_Bignum const *i)
{
  return &i->value;
}

INLINE mpz_t const *
xbignum_val (Lisp_Object i)
{
  return bignum_val (XBIGNUM (i));
}
* Re: [PATCH] Re: Bignum performance
2023-08-14 2:42 ` [PATCH] Re: Bignum performance Po Lu
@ 2023-08-14 4:16 ` Emanuel Berg
2023-08-14 7:15 ` Ihor Radchenko
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-14 4:16 UTC (permalink / raw)
To: emacs-devel
Po Lu wrote:
> AFAIU, no conversion takes place between ``Elisp formats''
> and GMP formats. Our bignums rely on GMP for all data
> storage and memory allocation.
There was a problem with that, as likely indicated by the
Fibonacci benchmark. That situation has hopefully been patched
by now, or soon will be, so now it will be interesting to see
if we can identify other such areas, and whether they can be
solved as effortlessly ...
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance
2023-08-14 2:42 ` [PATCH] Re: Bignum performance Po Lu
2023-08-14 4:16 ` Emanuel Berg
@ 2023-08-14 7:15 ` Ihor Radchenko
2023-08-14 7:50 ` Po Lu
2023-08-15 14:28 ` Emanuel Berg
1 sibling, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-14 7:15 UTC (permalink / raw)
To: Po Lu; +Cc: emacs-devel
Po Lu <luangruo@yahoo.com> writes:
>>> And we have huge overheads converting things back-and-forth
>>> between GMP and Elisp formats. It is by choice. And my patch
>>> did not do anything about this difference.
>
> AFAIU, no conversion takes place between ``Elisp formats'' and GMP
> formats. Our bignums rely on GMP for all data storage and memory
> allocation.
Thanks for the clarification!
So, GMP is not as fast as SBCL's implementation after all.
SBCL uses https://github.com/sbcl/sbcl/blob/master/src/code/bignum.lisp
- a custom bignum implementation, which is clearly faster compared to
GMP (in the provided benchmark):
perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
0.739 s
(0.007 s for SBCL)
17.11% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
7.35% emacs libgmp.so.10.5.0 [.] __gmpz_add
^^ already >0.1 sec.
6.51% emacs emacs [.] arith_driver
6.03% emacs libc.so.6 [.] malloc
5.57% emacs emacs [.] allocate_vectorlike
5.20% emacs [unknown] [k] 0xffffffffaae01857
4.16% emacs libgmp.so.10.5.0 [.] __gmpn_add_n_coreisbr
3.72% emacs emacs [.] check_number_coerce_marker
3.35% emacs fib.eln [.] F666962_fib_0
3.29% emacs emacs [.] allocate_pseudovector
2.30% emacs emacs [.] Flss
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance
2023-08-14 7:15 ` Ihor Radchenko
@ 2023-08-14 7:50 ` Po Lu
2023-08-14 9:28 ` Ihor Radchenko
2023-08-15 14:28 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Po Lu @ 2023-08-14 7:50 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> Po Lu <luangruo@yahoo.com> writes:
>
>>>> And we have huge overheads converting things back-and-forth
>>>> between GMP and Elisp formats. It is by choice. And my patch
>>>> did not do anything about this difference.
>>
>> AFAIU, no conversion takes place between ``Elisp formats'' and GMP
>> formats. Our bignums rely on GMP for all data storage and memory
>> allocation.
>
> Thanks for the clarification!
> So, GMP is not as fast as SBCL's implementation after all.
> SBCL uses https://github.com/sbcl/sbcl/blob/master/src/code/bignum.lisp
> - a custom bignum implementation, which is clearly faster compared to
> GMP (in the provided benchmark):
GMP is significantly faster than all other known bignum libraries.
Bignums are not considered essential for Emacs's performance, so the GMP
library is utilized in an inefficient fashion.
> perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
> 0.739 s
> (0.007 s for SBCL)
>
> 17.11% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
>
> 7.35% emacs libgmp.so.10.5.0 [.] __gmpz_add
>
> ^^ already >0.1 sec.
The subroutine actually performing arithmetic is in fact
mpn_add_n_coreisbr.
mpz_add and mpz_sizeinbase are ``mpz'' functions that perform memory
allocation, and our bignum functions frequently utilize mpz_sizeinbase
to ascertain whether a result can be represented as a fixnum. As such,
they don't constitute a fair comparison between the speed of the GMP
library itself and SBCL.
GMP provides low-level functions that place responsibility for memory
management and input verification in the hands of the programmer. These
are usually implemented in CPU-specific assembler, and are very fast.
That being said, they're not available within mini-gmp, and the primary
bottleneck is in fact mpz_sizeinbase.
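The effect of that fixnum check is observable from Lisp: any bignum
result that fits back into fixnum range is canonicalized to a fixnum,
which is why mpz_sizeinbase gets consulted on every operation. A small
illustration of the observable behavior (not the actual C code path):

```elisp
;; Emacs always stores an integer in fixnum range as a fixnum, so
;; every bignum operation must check whether its result still
;; needs a bignum.
(let ((big (1+ most-positive-fixnum)))  ; smallest bignum
  (list (fixnump big)                   ; nil
        (fixnump (1- big))))            ; (1- big) fits again => t
```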
* Re: [PATCH] Re: Bignum performance
2023-08-14 7:15 ` Ihor Radchenko
2023-08-14 7:50 ` Po Lu
@ 2023-08-15 14:28 ` Emanuel Berg
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-15 14:28 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> AFAIU, no conversion takes place between ``Elisp formats''
>> and GMP formats. Our bignums rely on GMP for all data
>> storage and memory allocation.
>
> Thanks for the clarification! So, GMP is not as fast as
> SBCL's implementation after all. SBCL uses
> https://github.com/sbcl/sbcl/blob/master/src/code/bignum.lisp
> - a custom bignum implementation, which is clearly faster
> compared to GMP (in the provided benchmark):
>
> perf record ~/Git/emacs/src/emacs -Q -batch -l /tmp/fib.eln
> 0.739 s
> (0.007 s for SBCL)
>
> 17.11% emacs libgmp.so.10.5.0 [.] __gmpz_sizeinbase
> 7.35% emacs libgmp.so.10.5.0 [.] __gmpz_add
>
> ^^ already >0.1 sec.
And we are not the only ones using GMP, right?
So maybe this issue in particular would be solved even more
broadly by a SBCL -> GMP transition ...
BTW, here are a bunch of new benchmarks from elisp-benchmarks
brought to CL. Ironically, as some of them were
from CL in the first place. But it is the way it goes.
https://dataswamp.org/~incal/cl/bench/
Several benchmarks are too Emacs-specific to make sense (work)
anywhere else, but there are still a few left to do.
Note that in fib.cl there are now, also brought over from
elisp-benchmarks, a bunch of new implementations to try.
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-14 2:20 ` Emanuel Berg
2023-08-14 2:42 ` [PATCH] Re: Bignum performance Po Lu
@ 2023-08-14 7:20 ` Ihor Radchenko
1 sibling, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-14 7:20 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> And the method was: instead of reallocating new objects for
> bignums, we are now reusing existing allocations for new data?
Nope. Reusing existing allocations was already in place. I just
optimized how Elisp searches for existing allocations that can
be reused.
>> Sure. Feel free to share such benchmark for Elisp vs. CL.
>> I only know the benchmark library for Elisp. No equivalent
>> comparable benchmark for CL.
>
> I'm working on it! This will be very interesting, for sure.
>
> The need for speed - but in a very methodical way ...
Yup. It is always better to give a very specific benchmark than
to bluntly claim that Elisp is slow, because a detailed
benchmark provides concrete data on what might be improved (or
not; but we can at least discuss without degrading the
discussion quality).
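In that spirit, a hypothetical helper (the names are illustrative,
not taken from elisp-benchmarks) for producing labeled, byte-compiled
micro-benchmarks:

```elisp
(require 'cl-lib)

(defun my/bench (name reps form)
  "Hypothetical sketch: time REPS byte-compiled runs of FORM, labeled NAME."
  (let ((fn (byte-compile `(lambda () ,form))))
    ;; `benchmark-run' returns (ELAPSED GC-COUNT GC-ELAPSED).
    (cl-destructuring-bind (elapsed gcs gc-time)
        (benchmark-run reps (funcall fn))
      (message "%-12s %.3fs (%d GCs, %.3fs in GC)" name elapsed gcs gc-time))))

;; Example usage:
;; (my/bench "fixnum-add" 1000000 '(+ 1 2))
;; (my/bench "bignum-add" 1000000 '(+ most-positive-fixnum 2))
```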
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-11 19:41 ` Ihor Radchenko
2023-08-11 19:50 ` Emanuel Berg
@ 2023-08-11 22:46 ` Emanuel Berg
2023-08-12 8:30 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 22:46 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> Are we talking a list of Emacs C functions executing with
>> the corresponding times they have been in execution in
>> a tree data structure? :O
>
> That's what GNU perf does - it is a sampling profiler in
> GNU/Linux. The Elisp equivalent is profiler.el, but it does
> not reveal underlying C functions.
Ah, we are compiling C in a special way to see this with the
tool later.
What one should do then is run it for 100 hours for 100 Emacs
users' arbitrary Emacs use, then we would see what everyone
was up to in the C part as well.
Some would say even slow C is pretty fast, but as we just saw,
even that can actually be improved a lot ...
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-11 22:46 ` Emanuel Berg
@ 2023-08-12 8:30 ` Ihor Radchenko
2023-08-12 16:22 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-12 8:30 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> What one should do then is run it for 100 hours for 100 Emacs
> users' arbitrary Emacs use, then we would see what everyone
> was up to in the C part as well.
There are known parts of Emacs C code that could see performance
optimization. But we need someone to actually go ahead and provide
patches.
For example, regexp search can see more optimizations (e.g. see
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=63225). As you can
imagine, it is rather commonly used in Emacs as an editor.
Another example is marker processing - Emacs re-adjusts all the markers
in a buffer by altering each single marker, in O(N_markers). As the number
of markers grows, we get a problem. See
https://yhetil.org/emacs-devel/jwvsfntduas.fsf-monnier+emacs@gnu.org,
for example (note that the new overlay implementation already provides
the necessary data structures).
etc etc.
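The marker cost mentioned above is easy to probe from Lisp; a rough
sketch (figures are machine-dependent, and the function name is mine):

```elisp
;; Every insertion must adjust all markers after the insertion
;; point, so insert time grows with the number of live markers.
(defun my/insert-time-with-markers (n)
  "Hypothetical probe: time 1000 insertions with N markers in the buffer."
  (with-temp-buffer
    (dotimes (_ n)
      (set-marker (make-marker) (point) (current-buffer)))
    (car (benchmark-run 1000
           (progn (goto-char (point-min)) (insert "x"))))))

;; Compare e.g. (my/insert-time-with-markers 0)
;; against      (my/insert-time-with-markers 100000)
```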
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-12 8:30 ` Ihor Radchenko
@ 2023-08-12 16:22 ` Emanuel Berg
2023-08-13 9:12 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 16:22 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> What one should do then is run it for 100 hours for 100
>> Emacs users' arbitrary Emacs use, then we would see what
>> everyone was up to in the C part as well.
>
> There are known parts of Emacs C code that could see
> performance optimization. But we need someone to actually go
> ahead and provide patches.
So to make Emacs faster, we need to:
1. Fix certain individual and identified areas in C that are
currently slow for known reasons of the properties of the
algorithms and/or data structure involved.
2. On a system level, find and change areas in C that have to
do with how the Lisp model is implemented and
upheld generally.
3. I don't know where the native/byte compiler would go from
there; it depends on how big the changes in step 2 are.
4. Actual and existing Elisp code doesn't have to be changed
a lot, and it would still be Elisp, only it would be
compiled with the methods from the CL world, and in
particular from SBCL.
5. This would not be a complete integration between Elisp and
CL, but since the methods, tools and supply chain from CL
would now be working for Elisp, it would be a big step
towards such a possible integration in the future,
if desired.
--
underground experts united
https://dataswamp.org/~incal
* Re: [PATCH] Re: Bignum performance (was: Shrinking the C core)
2023-08-12 16:22 ` Emanuel Berg
@ 2023-08-13 9:12 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-13 9:12 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> 4. Actual and existing Elisp code doesn't have to be changed
> a lot, and it would still be Elisp, only it would be
> compiled with the methods from the CL world, and in
> particular from SBCL.
>
> 5. This would not be a complete integration between Elisp and
> CL, but since the methods, tools and supply chain from CL
> would now be working for Elisp, it would be a big step
> towards such a possible integration in the future,
> if desired.
I do not think that it is possible in practice. Elisp and CL internals
are different, and you cannot magically overcome this.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Bignum performance (was: Shrinking the C core)
2023-08-11 10:27 ` Bignum performance (was: Shrinking the C core) Ihor Radchenko
2023-08-11 12:10 ` Emanuel Berg
@ 2023-08-11 14:14 ` Mattias Engdegård
2023-08-11 18:09 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Mattias Engdegård @ 2023-08-11 14:14 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: emacs-devel
11 aug. 2023 kl. 12.27 skrev Ihor Radchenko <yantar92@posteo.net>:
> Most of the time is spent in (1) GC; (2) Creating bigint:
This is well known. The reason bignums are in Elisp is that their very presence is helpful in several ways. They really only need to be correct; speed is a secondary concern (and GMP is actually overkill; mini-GMP is fine).
I'm not aware of any useful Elisp code that is bignum-intensive. The nearest is perhaps using Calc for certain kinds of algebraic operations but even there, bignums are rarely a significant performance factor.
Making the allocator and GC faster in general is useful since it benefits everything. Just speeding up bignums, not so much.
* Re: Bignum performance (was: Shrinking the C core)
2023-08-11 14:14 ` Mattias Engdegård
@ 2023-08-11 18:09 ` Emanuel Berg
0 siblings, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-11 18:09 UTC (permalink / raw)
To: emacs-devel
Mattias Engdegård wrote:
>> Most of the time is spent in (1) GC; (2) Creating bigint:
>
> This is well known.
Not by everyone :)
> I'm not aware of any useful Elisp code that is
> bignum-intensive. The nearest is perhaps using Calc for
> certain kinds of algebraic operations but even there,
> bignums are rarely a significant performance factor.
>
> Making the allocator and GC faster in general is useful
> since it benefits everything. Just speeding up bignums, not
> so much.
But we have the best editor in the world, and it is based on
Lisp. Shouldn't we have the best Lisp in the world as well?
It is not like we would kill the Elisp people, on the contrary
we want them. They would be assimilated ...
And maybe the CL people would come to Emacs as well?
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-10 7:44 ` Eli Zaretskii
2023-08-10 21:54 ` Emanuel Berg
@ 2023-08-10 23:49 ` Eric S. Raymond
2023-08-11 0:03 ` Christopher Dimech
2023-08-11 7:03 ` Eli Zaretskii
1 sibling, 2 replies; 247+ messages in thread
From: Eric S. Raymond @ 2023-08-10 23:49 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: luangruo, emacs-devel
Eli Zaretskii <eliz@gnu.org>:
> What's more, Emacs is still a single-threaded Lisp machine, although
> in the last 10 years CPU power develops more and more in the direction
> of multiple cores and execution units, with single execution units
> being basically as fast (or as slow) today as they were a decade ago.
Yeah, I've been thinking hard about that single-threadedness in the
last couple of weeks. I have a design sketch in my head for a
re-partitioning of Emacs into a front-end/back-end pair communicating
via sockets, with the back end designed to handle multiple socket
sessions for collaborative editing. (No, this isn't my big secret idea,
it's something I think should be done *along with* my big secret idea.)
For this to work, a lot of what is now global state would need to be
captured into a structure associated with each socket session. I notice
that it's difficult to find an obviously correct cut line between what
the session structure should own and what still needs to be shared state;
like, *some* keymaps definitely need to be per-session and buffers still need
to be shared, but what about the buffer's mode set and mode-specific keymaps?
Or marks? Or overlays?
This is a difficult design problem because of some inherent features
of the Emacs Lisp language model. I did not fail to notice that those
same features would make exploiting concurrency difficult even in the
present single-user-only implementation. It is unclear what
could be done to fix this without significant language changes.
> And if these theoretical arguments don't convince you, then there are
> facts: the Emacs display engine, for example, was completely rewritten
> since the 1990s, and is significantly more expensive than the old one
> (because it lifts several of the gravest limitations of the old
> redisplay). Similarly with some other core parts and internals.
Are you seriously trying to tell me that the display engine rewrite ate
*three orders of magnitude* in machine-speed gains? No sale. I have
some idea of the amount of talent on the devteam and I plain do not
believe y'all are that incompetent.
> We found this conclusion to be false in practice, at least in Emacs
> practice.
I'm not persuaded, because your causal account doesn't pass my smell
test. I think you're misdiagnosing the performance problems through
being too close to them. It would take actual benchmark figures to
persuade me that Lisp interpretive overhead is the actual culprit.
Your project, your choices. But I have a combination of experience
with the code going back nearly to its origins with an outside view of
its present state, and I think you're seeing your own assumptions
about performance lag reflected back at you more than the reality.
> Please be more patient,
That *was* patient. I didn't aim for his head until the *second*
time he poked me. :-)
I'll stop trying to make preparatory changes. If I can allocate
enough bandwidth for the conversation, I may try on a couple of
hopefully thought-provoking design questions.
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
* Re: Shrinking the C core
2023-08-10 23:49 ` Shrinking the C core Eric S. Raymond
@ 2023-08-11 0:03 ` Christopher Dimech
2023-08-11 8:24 ` Immanuel Litzroth
2023-08-11 7:03 ` Eli Zaretskii
1 sibling, 1 reply; 247+ messages in thread
From: Christopher Dimech @ 2023-08-11 0:03 UTC (permalink / raw)
To: esr; +Cc: Eli Zaretskii, luangruo, emacs-devel
> Sent: Friday, August 11, 2023 at 11:49 AM
> From: "Eric S. Raymond" <esr@thyrsus.com>
> To: "Eli Zaretskii" <eliz@gnu.org>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Subject: Re: Shrinking the C core
>
> Eli Zaretskii <eliz@gnu.org>:
> > What's more, Emacs is still a single-threaded Lisp machine, although
> > in the last 10 years CPU power develops more and more in the direction
> > of multiple cores and execution units, with single execution units
> > being basically as fast (or as slow) today as they were a decade ago.
>
> Yeah, I've been thinking hard about that single-threadedness in the
> last couple of weeks. I have a design sketch in my head for a
> re-partitioning of Emacs into a front-end/back-end pair communicating
> via sockets, with the back end designed to handle multiple socket
> sessions for collaborative editing. (No, this isn't my big secret idea,
> it's something I think should be done *along with* my big secret idea.)
>
> For this to work, a lot of what is now global state would need to be
> captured into a structure associated with each socket session. I notice
> that it's difficult to find an obviously correct cut line between what
> the session structure should own and what still needs to be shared state;
> like, *some* keymaps definitely need to be per-session and buffers still need
> to be shared, but what about the buffer's mode set and mode-specific keymaps?
> Or marks? Or overlays?
>
> This is a difficult design problem because of some inherent features
> of the Emacs Lisp language model. I did not fail to notice that those
> same features would make exploiting concurrency difficult even in the
> present single-user-only implementation. It is unclear what
> could be done to fix this without significant language changes.
>
> > And if these theoretical arguments don't convince you, then there are
> > facts: the Emacs display engine, for example, was completely rewritten
> > since the 1990s, and is significantly more expensive than the old one
> > (because it lifts several of the gravest limitations of the old
> > redisplay). Similarly with some other core parts and internals.
>
> Are you seriously trying to tell me that the display engine rewrite ate
> *three orders of magnitude* in machine-speed gains? No sale. I have
> some idea of the amount of talent on the devteam and I plain do not
> believe y'all are that incompetent.
>
> > We found this conclusion to be false in practice, at least in Emacs
> > practice.
>
> I'm not persuaded, because your causal account doesn't pass my smell
> test. I think you're misdiagnosing the performance problems through
> being too close to them. It would take actual benchmark figures to
> persuade me that Lisp interpretive overhead is the actual culprit.
>
> Your project, your choices. But I have a combination of experience
> with the code going back nearly to its origins with an outside view of
> its present state, and I think you're seeing your own assumptions
> about performance lag reflected back at you more than the reality.
>
> > Please be more patient,
>
> That *was* patient.
> I didn't aim for his head until the *second* time he poked me. :-)
Good you're not a general on a battlefield! I don't have such rules of conduct.
Did you know that there are tribes in the Amazon River Basin who simply kill
you if they see you?
> I'll stop trying to make preparatory changes. If I can allocate
> enough bandwidth for the conversation, I may try on a couple of
> hopefully thought-provoking design questions.
> --
> <a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
>
>
>
>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-11 0:03 ` Christopher Dimech
@ 2023-08-11 8:24 ` Immanuel Litzroth
0 siblings, 0 replies; 247+ messages in thread
From: Immanuel Litzroth @ 2023-08-11 8:24 UTC (permalink / raw)
To: Christopher Dimech; +Cc: esr, Eli Zaretskii, luangruo, emacs-devel
On Fri, Aug 11, 2023 at 2:04 AM Christopher Dimech <dimech@gmx.com> wrote:
>
>
>
>
> > Sent: Friday, August 11, 2023 at 11:49 AM
> > From: "Eric S. Raymond" <esr@thyrsus.com>
> > To: "Eli Zaretskii" <eliz@gnu.org>
> > Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> > Subject: Re: Shrinking the C core
> >
> > Eli Zaretskii <eliz@gnu.org>:
> > > What's more, Emacs is still a single-threaded Lisp machine, although
> > > in the last 10 years CPU power develops more and more in the direction
> > > of multiple cores and execution units, with single execution units
> > > being basically as fast (or as slow) today as they were a decade ago.
> >
> > Yeah, I've been thinking hard about that single-threadedness in the
> > last couple of weeks. I have a design sketch in my head for a
> > re-partitioning of Emacs into a front-end/back-end pair communicating
> > via sockets, with the back end designed to handle multiple socket
> > sessions for collaborative editing. (No, this isn't my big secret idea,
> > it's something I think should be done *along with* my big secret idea.)
> >
> > For this to work, a lot of what is now global state would need to be
> > captured into a structure associated with each socket session. I notice
> > that it's difficult to find an obviously correct cut line between what
> > the session structure should own and what still needs to be shared state;
> > like, *some* keymaps definitely need to be per-session and buffers still need
> > to be shared, but what about the buffer's mode set and mode-specific keymaps?
> > Or marks? Or overlays?
> >
> > This is a difficult design problem because of some inherent features
> > of the Emacs Lisp language model. I did not fail to notice that those
> > same features would make exploiting concurrency difficult even in the
> > present single-user-only implementation. It is unclear what
> > could be done to fix this without significant language changes.
> >
> > > And if these theoretical arguments don't convince you, then there are
> > > facts: the Emacs display engine, for example, was completely rewritten
> > > since the 1990s, and is significantly more expensive than the old one
> > > (because it lifts several of the gravest limitations of the old
> > > redisplay). Similarly with some other core parts and internals.
> >
> > Are you seriously trying to tell me that the display engine rewrite ate
> > *three orders of magnitude* in machine-speed gains? No sale. I have
> > some idea of the amount of talent on the devteam and I plain do not
> > believe y'all are that incompetent.
> >
> > > We found this conclusion to be false in practice, at least in Emacs
> > > practice.
> >
> > I'm not persuaded, because your causal account doesn't pass my smell
> > test. I think you're misdiagnosing the performance problems through
> > being too close to them. It would take actual benchmark figures to
> > persuade me that Lisp interpretive overhead is the actual culprit.
> >
> > Your project, your choices. But I have a combination of experience
> > with the code going back nearly to its origins with an outside view of
> > its present state, and I think you're seeing your own assumptions
> > about performance lag reflected back at you more than the reality.
> >
> > > Please be more patient,
> >
> > That *was* patient.
>
> > I didn't aim for his head until the *second* time he poked me. :-)
>
> Good you're not a general on a battlefield! I don't have such rules of conduct.
> Did you know that there are tribes in the Amazon River Basin who simply kill
> you if they see you?
How did those tribes get to know about Eric?
--
-- A man must either resolve to point out nothing new or to become a
slave to defend it. -- Sir Isaac Newton
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 23:49 ` Shrinking the C core Eric S. Raymond
2023-08-11 0:03 ` Christopher Dimech
@ 2023-08-11 7:03 ` Eli Zaretskii
2023-08-11 7:19 ` tomas
2023-08-11 10:57 ` Eli Zaretskii
1 sibling, 2 replies; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-11 7:03 UTC (permalink / raw)
To: esr; +Cc: luangruo, emacs-devel
> Date: Thu, 10 Aug 2023 19:49:49 -0400
> From: "Eric S. Raymond" <esr@thyrsus.com>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
>
> Eli Zaretskii <eliz@gnu.org>:
> > What's more, Emacs is still a single-threaded Lisp machine, although
> > in the last 10 years CPU power develops more and more in the direction
> > of multiple cores and execution units, with single execution units
> > being basically as fast (or as slow) today as they were a decade ago.
>
> This is a difficult design problem because of some inherent features
> of the Emacs Lisp language model. I did not fail to notice that those
> same features would make exploiting concurrency difficult even in the
> present single-user-only implementation. It is unclear what
> could be done to fix this without significant language changes.
This stuff was discussed lately in several threads on this list. And
yes, finding which parts of the global state to leave shared and which
to make private to threads is a large part of the issue. My personal
opinion is that introducing concurrency into Emacs will need redesign
of the internals, not just some changes. But that's me.
> > And if these theoretical arguments don't convince you, then there are
> > facts: the Emacs display engine, for example, was completely rewritten
> > since the 1990s, and is significantly more expensive than the old one
> > (because it lifts several of the gravest limitations of the old
> > redisplay). Similarly with some other core parts and internals.
>
> Are you seriously to trying to tell me that the display engine rewrite ate
> *three orders of magnitude* in machine-speed gains? No sale. I have
> some idea of the amount of talent on the devteam and I plain do not
> believe y'all are that incompetent.
First, I don't know where you got the 3-orders-of-magnitude
figure. During the 1990s, PCs generally had 100 to 200 MHz clocks, and
nowadays we have ~3.5 GHz clocks -- that's 1.5 orders of magnitude, not
3. As we all know, chip clock speeds stalled around 2004; processing
power continues growing due to multiprocessing, but that doesn't help
Emacs, because Lisp mostly runs on a single execution unit.
Second, the new display engine makes many more GUI (Xlib, Cairo,
etc.) API calls than the old one did -- and those take some
significant additional processing. Moreover, no amount of Emacs
devteam talent can do anything about the code quality and algorithms
in those libraries and components of the OS.
Third, the new display engine was not just a rewrite of the old
capabilities: it _added_ quite a lot of functionalities that were
either very hard to implement or plainly not possible with the old
one. These additional functionalities are nowadays used very widely,
and they do eat CPU power.
And finally, there are plain facts: users do complain about slow
operation, including during redisplay, in some (fortunately, usually
rare) situations.
As an example perhaps closer to your heart: certain VC-related
operations are slow enough (hundreds of milliseconds to seconds, and
sometimes minutes!) to annoy users. VCS repositories can be very
large these days, and that could be part of the problem. We just had
a long discussion here about the fastest possible way of collecting
all the files in a deep directory tree, see bug#64735
(https://debbugs.gnu.org/cgi/bugreport.cgi?bug=64735). The somewhat
surprising findings there aside, one conclusion that stands out is that
the time spent in GC takes a significant fraction of the elapsed time, and
that flies in the face of moving code from C to Lisp.
So even if the theoretical considerations don't convince you, the
facts should: we do have performance problems in Emacs, and they are
real enough for us to attempt to solve them by introducing non-trivial
complexity, such as the native-compilation feature. We wouldn't be
doing that (in fact, IMO it would have been madness to do that) unless
the performance of Lisp programs were a real issue.
> > We found this conclusion to be false in practice, at least in Emacs
> > practice.
>
> I'm not persuaded, because your causal account doesn't pass my smell
> test. I think you're misdiagnosing the performance problems through
> being too close to them. It would take actual benchmark figures to
> persuade me that Lisp interpretive overhead is the actual culprit.
People did benchmarks, you can find them in the archives. When the
native-compilation was considered for inclusion, we did benchmark some
representative code to assess the gains. The above-mentioned bug
discussion about traversing directories also includes benchmarks.
If you want to sample this further, try benchmarking shr.el when it
performs layout of HTML with variable-pitch fonts. It basically does
what the display engine does all the time in C, but you can see how
much slower this is in Lisp, even after several iterations where we
looked for and found the fastest possible ways of doing the job in
Lisp.
You might be surprised and even astonished by these facts, to the
degree that you are reluctant to accept them, but they are facts
nonetheless.
> > Please be more patient,
>
> That *was* patient. I didn't aim for his head until the *second*
> time he poked me. :-)
Well, then please be *more* patient. People here are generally
well-meaning, and certainly have Emacs's best interests in their
minds, so shooting them too early is not the best idea, to put it
mildly.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-11 7:03 ` Eli Zaretskii
@ 2023-08-11 7:19 ` tomas
2023-08-11 10:57 ` Eli Zaretskii
1 sibling, 0 replies; 247+ messages in thread
From: tomas @ 2023-08-11 7:19 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: esr, luangruo, emacs-devel
[-- Attachment #1: Type: text/plain, Size: 862 bytes --]
On Fri, Aug 11, 2023 at 10:03:49AM +0300, Eli Zaretskii wrote:
[...]
> This stuff was discussed lately in several threads on this list. And
> yes, finding which parts of the global state to leave shared and which
> to make private to threads is a large part of the issue. My personal
> opinion is that introducing concurrency into Emacs will need redesign
> of the internals, not just some changes. But that's me.
Not only you -- I do agree thoroughly. The hard part is that most of
Emacs isn't aware that things can happen behind its back.
Providing the low level mechanism is just putting the can opener to
Pandora's box: dealing with what comes out is definitely the more
interesting part :-)
(Don't get me wrong: the metaphor I used might imply I don't think it's
desirable. Quite on the contrary).
Cheers
--
t
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-11 7:03 ` Eli Zaretskii
2023-08-11 7:19 ` tomas
@ 2023-08-11 10:57 ` Eli Zaretskii
1 sibling, 0 replies; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-11 10:57 UTC (permalink / raw)
To: esr; +Cc: luangruo, emacs-devel
> Date: Fri, 11 Aug 2023 10:03:49 +0300
> From: Eli Zaretskii <eliz@gnu.org>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
>
> And finally, there are plain facts: users do complain about slow
> operation, including during redisplay, in some (fortunately, usually
> rare) situations.
Btw, another aspect of this is user expectations: where we previously
were accustomed to the fact that listing files in a directory takes
some perceptible time, we are now "spoiled rotten" by the speed of our
CPUs and filesystems. So the overhead incurred by Emacs-specific
processing, like reading from subprocesses, consing Lisp objects, and
GC, which 20 years ago would be insignificant, is significant now, and
users pay attention.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 1:19 ` Eric S. Raymond
` (3 preceding siblings ...)
2023-08-10 7:44 ` Eli Zaretskii
@ 2023-08-10 11:28 ` Dmitry Gutov
2023-08-10 21:26 ` Eric S. Raymond
2023-08-12 2:46 ` Richard Stallman
4 siblings, 2 replies; 247+ messages in thread
From: Dmitry Gutov @ 2023-08-10 11:28 UTC (permalink / raw)
To: esr, Po Lu; +Cc: emacs-devel
On 10/08/2023 04:19, Eric S. Raymond wrote:
> basic considerations of clocks per second would
> predict it to run a *dead minimum* of two orders of magnitude faster
> than the Emacs of, say, 1990.
In addition to the examples made by others, I'll say that the sizes of
software projects have increased from 1990 as well. So if you have a
Lisp routine that simply enumerates the files in one project, it has to
do proportionally more work.
> And 1990 Emacs was already way fast enough for the human eye and
> brain, which can't even register interface lag of less than 0.17
> seconds (look up the story of Jef Raskin and how he exploited this
> psychophysical fact in the design of the Canon Cat sometime; it's very
> instructive). The human auditory system can perceive finer timeslices,
> down to about 0.02s in skilled musicians, but we're not using elisp
> for audio signal processing.
I've had to expend significant effort on many occasions to keep various
command execution times below 0.17 s, or 0.02 s, and so on.
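As a rough illustration of the measurement involved (a sketch, not Emacs code): elapsed wall-clock milliseconds between two CLOCK_MONOTONIC timestamps, for checking a command against one of those latency budgets:

```c
#include <time.h>

/* Illustrative helper: milliseconds elapsed between two timestamps
   taken with clock_gettime(CLOCK_MONOTONIC, ...).  A command wrapped
   between the two samples can then be compared against a budget such
   as 170 ms (perceptible UI lag) or 20 ms (audio-grade timing). */
double elapsed_ms(struct timespec start, struct timespec end)
{
    return (end.tv_sec - start.tv_sec) * 1e3
         + (end.tv_nsec - start.tv_nsec) / 1e6;
}
```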
Which is to say, while I'm very much in favor of the "lean core" concept
myself, we should accompany far-reaching changes like that with
appropriate benchmarking.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 11:28 ` Dmitry Gutov
@ 2023-08-10 21:26 ` Eric S. Raymond
2023-08-12 2:46 ` Richard Stallman
1 sibling, 0 replies; 247+ messages in thread
From: Eric S. Raymond @ 2023-08-10 21:26 UTC (permalink / raw)
To: Dmitry Gutov; +Cc: Po Lu, emacs-devel
Dmitry Gutov <dmitry@gutov.dev>:
> Which is to say, while I'm very much in favor of the "lean core" concept
> myself, we should accompany far-reaching changes like that with appropriate
> benchmarking.
Fair enough!
--
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-10 11:28 ` Dmitry Gutov
2023-08-10 21:26 ` Eric S. Raymond
@ 2023-08-12 2:46 ` Richard Stallman
2023-08-12 3:22 ` Emanuel Berg
2023-08-12 3:28 ` Christopher Dimech
1 sibling, 2 replies; 247+ messages in thread
From: Richard Stallman @ 2023-08-12 2:46 UTC (permalink / raw)
To: Dmitry Gutov; +Cc: esr, luangruo, emacs-devel
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
There are occasions when it is useful, for added flexibility,
to move some function from C to Lisp. However, stability
is an important goal for Emacs, so we should not even try
to move large amounts of code to Lisp just for the sake
of moving code to Lisp.
The rate at which we have added features already causes significant
instability.
--
Dr Richard Stallman (https://stallman.org)
Chief GNUisance of the GNU Project (https://gnu.org)
Founder, Free Software Foundation (https://fsf.org)
Internet Hall-of-Famer (https://internethalloffame.org)
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 2:46 ` Richard Stallman
@ 2023-08-12 3:22 ` Emanuel Berg
2023-08-12 8:33 ` Ihor Radchenko
2023-08-12 18:32 ` tomas
2023-08-12 3:28 ` Christopher Dimech
1 sibling, 2 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 3:22 UTC (permalink / raw)
To: emacs-devel
Richard Stallman wrote:
> There are occasiona when it is useful, for added
> flexibility, to move some function from C to Lisp. However,
> stability is an important goal for Emacs, so we should not
> even try to move large amounts of code to Lisp just for the
> sake of moving code to Lisp.
It is just cool that Emacs is written in two languages; it is
like two continents or something, connected by bridges and
overlapping in certain areas.
Maybe we can use SBCL to compile Elisp, and after that
integrate it into Emacs so you would be able to run all old
Elisp without changing the code, only now it would in fact be
Common Lisp implementing Elisp, i.e. the opposite of our
cl-lib (`cl-loop' etc).
Maybe some normalization efforts would have to be done to the
syntax here and there as preparatory steps ...
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 3:22 ` Emanuel Berg
@ 2023-08-12 8:33 ` Ihor Radchenko
2023-08-12 15:58 ` Emanuel Berg
2023-08-12 18:32 ` tomas
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-12 8:33 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> Maybe we can use SBCL to compile Elisp, and after that
> integrate it into Emacs so you would be able to run all old
> Elisp without changing the code, only now it would in fact be
> Common Lisp implementing Elisp, i.e. the opposite of our
> cl-lib (`cl-loop' etc).
You would at least need to convert between the internal object
representations of Elisp and CL. Not to mention many other non-obvious
problems arising from subtle differences in the function behavior.
In principle, Emacs has support for external modules. But it is not
widely used.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 8:33 ` Ihor Radchenko
@ 2023-08-12 15:58 ` Emanuel Berg
2023-08-13 9:13 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 15:58 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> Maybe we can use SBCL to compile Elisp, and after that
>> integrate it into Emacs so you would be able to run all old
>> Elisp without changing the code, only now it would in fact
>> be Common Lisp implementing Elisp, i.e. the opposite of our
>> cl-lib (`cl-loop' etc).
>
> You would at least need to convert between internal object
> representation in Elisp and CL.
But why is CL so much faster to begin with?
> Not to mention many other non-obvious problems arising from
> subtle differences in the function behavior.
Maybe one can fork SBCL into SBCL-E first and adapt it from
there since everything that involves changing existing Elisp
code, be it syntax or semantics, would be very impractical.
And we are also not unhappy with Elisp as a language, we just
want it to be faster, so what we need to do is discover and
extract the secrets of compiling Lisp into really fast
software, which we now have identified as being held by
SBCL ...
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 15:58 ` Emanuel Berg
@ 2023-08-13 9:13 ` Ihor Radchenko
2023-08-13 9:55 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-13 9:13 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> You would at least need to convert between internal object
>> representation in Elisp and CL.
>
> But why is CL so much faster to begin with?
You did not yet show that CL is "much faster", just that the bignum
implementation in CL is much faster. And bignum performance is not
something that matters in practice.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-13 9:13 ` Ihor Radchenko
@ 2023-08-13 9:55 ` Emanuel Berg
2023-08-13 10:23 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-13 9:55 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>>> You would at least need to convert between internal object
>>> representation in Elisp and CL.
>>
>> But why is CL so much faster to begin with?
>
> You did not yet show that CL is "much faster".
On the contrary, it is evident.
> Just that bignum implementation in CL is much faster.
> And bignum performance is not something that matters
> in practice.
Do we have a set of benchmarks in Elisp that everyone agrees
are good, and that can output data easily to show the results?
Didn't you do that with an ASCII table in a previous post?
Maybe I can use that source with minimal modifications to get
them to run in CL, so we can compare more broadly, but also in
an agreed-upon way. So it will not be about what really
matters or what people really use etc, it will only be about
performing as good as possible on those benchmarks, without
cheating obviously ;)
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-13 9:55 ` Emanuel Berg
@ 2023-08-13 10:23 ` Ihor Radchenko
2023-08-13 20:55 ` Emanuel Berg
2023-08-14 0:13 ` Emanuel Berg
0 siblings, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-13 10:23 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> Do we have a set of benchmarks in Elisp that everyone agrees
> are good, and that can output data easily to show the results?
> Didn't you do that with an ASCII table in a previous post?
>
> Maybe I can use that source with minimal modifications to get
> them to run in CL, so we can compare more broadly, but also in
> an agreed-upon way. So it will not be about what really
> matters or what people really use etc, it will only be about
> performing as good as possible on those benchmarks, without
> cheating obviously ;)
See https://elpa.gnu.org/packages/elisp-benchmarks.html
These are the benchmarks we used during the discussion of native-comp
support.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-13 10:23 ` Ihor Radchenko
@ 2023-08-13 20:55 ` Emanuel Berg
2023-08-14 0:13 ` Emanuel Berg
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-13 20:55 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> Do we have a set of benchmarks in Elisp that everyone
>> agrees are good, and that can output data easily to show
>> the results? Didn't you do that with an ASCII table in
>> a previous post?
>>
>> Maybe I can use that source with minimal modifications to
>> get them to run in CL, so we can compare more broadly, but
>> also in an agreed-upon way. So it will not be about what
>> really matters or what people really use etc, it will only
>> be about performing as good as possible on those
>> benchmarks, without cheating obviously ;)
>
> See https://elpa.gnu.org/packages/elisp-benchmarks.html
> These are the benchmarks we used during the discussion of
> native-comp support.
OK, I'm on it! I'll get back to you all -- thanks for
the discussion.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-13 10:23 ` Ihor Radchenko
2023-08-13 20:55 ` Emanuel Berg
@ 2023-08-14 0:13 ` Emanuel Berg
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-14 0:13 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> Do we have a set of benchmarks in Elisp that everyone
>> agrees are good, and that can output data easily to show
>> the results? Didn't you do that with an ASCII table in
>> a previous post?
>>
>> Maybe I can use that source with minimal modifications to
>> get them to run in CL, so we can compare more broadly, but
>> also in an agreed-upon way. So it will not be about what
>> really matters or what people really use etc, it will only
>> be about performing as good as possible on those
>> benchmarks, without cheating obviously ;)
>
> See https://elpa.gnu.org/packages/elisp-benchmarks.html
> These are the benchmarks we used during the discussion of
> native-comp support.
Indeed, I see now in comments to my code that I got my
initial fib.el [1] from elisp-benchmarks ...
But: 2 benchmarks implemented in CL! [bubble.cl yanked in full
last]
But this is just a sneak peek, not even a beta release, just
thought it could be interesting to see something tangible
after all this theorizing ...
https://dataswamp.org/~incal/cl/bench/bubble.cl
https://dataswamp.org/~incal/cl/bench/fib.cl
PS. Same old error message with Slime BTW, every time I start
"slime-set-connection-info: Args out of range: 0". But then
it seems to work fine. I saw this error message before,
don't remember how I solved it then. It is the MELPA
slime-20230730.1734, or 2.28 according to `slime-version'.
PPS. No C-u M-x slime-version RET BTW ...
[1] https://dataswamp.org/~incal/emacs-init/fib.el
;; this file:
;; https://dataswamp.org/~incal/cl/bench/bubble.cl
;;
;; original Elisp source:
;; elisp-benchmarks
(load "~/public_html/cl/bench/timing.cl")
(let* ((bubble-len 1000)
       (bubble-lst (mapcar #'random
                           (make-list bubble-len
                                      :initial-element most-positive-fixnum))))
  (defun bubble (lst)
    (declare (optimize speed (safety 0) (debug 0)))
    (let ((i (length lst)))
      (loop while (< 1 i) do
        (let ((b lst))
          (loop while (cdr b) do
            (when (< (cadr b) (car b))
              (rplaca b (prog1 (cadr b)
                          (rplacd b (cons (car b) (cddr b))))))
            (setq b (cdr b))))
        (decf i))
      lst))
  (defun bubble-entry ()
    (loop repeat 100
          for l = (copy-list bubble-lst)
          do (bubble l))))
(timing (bubble-entry))
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 3:22 ` Emanuel Berg
2023-08-12 8:33 ` Ihor Radchenko
@ 2023-08-12 18:32 ` tomas
2023-08-12 22:08 ` Emanuel Berg
2023-08-12 23:09 ` Emanuel Berg
1 sibling, 2 replies; 247+ messages in thread
From: tomas @ 2023-08-12 18:32 UTC (permalink / raw)
To: emacs-devel
[-- Attachment #1: Type: text/plain, Size: 235 bytes --]
On Sat, Aug 12, 2023 at 05:22:45AM +0200, Emanuel Berg wrote:
> Maybe we can use SBCL to compile Elisp, and after that
> integrate it into Emacs [...]
- https://www.cliki.net/CL-Emacs
- https://xkcd.com/927/
Cheers
--
t
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 18:32 ` tomas
@ 2023-08-12 22:08 ` Emanuel Berg
2023-08-12 23:09 ` Emanuel Berg
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 22:08 UTC (permalink / raw)
To: emacs-devel
tomas wrote:
>> Maybe we can use SBCL to compile Elisp, and after that
>> integrate it into Emacs [...]
>
> - https://www.cliki.net/CL-Emacs
It says:
Also includes an elisp emulation package from
Ingvar Mattsson: note that this is not the same as the CLOCC
package below [...]
CLOCC contains a package (elisp.lisp) that implements some
part of elisp in CL.
Sounds cool (probably a cool guy, Ingvar Mattsson, BTW). However,
changing software altogether because Elisp is slow -- maybe
that's burning down the house to kill the rats.
I am aware of Emacs forks as well as other implementations
altogether, but then the problem arises that everyone still wants to
use familiar software, e.g. Gnus, Emacs-w3m, ERC ...
But maybe one could integrate parts of that solution into
Emacs to have the cake and eat it as well. Is the CL in
CL-Emacs as fast as compiled CL from SBCL?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 18:32 ` tomas
2023-08-12 22:08 ` Emanuel Berg
@ 2023-08-12 23:09 ` Emanuel Berg
2023-08-13 5:50 ` tomas
` (2 more replies)
1 sibling, 3 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 23:09 UTC (permalink / raw)
To: emacs-devel
tomas wrote:
> https://xkcd.com/927/
HOW STANDARDS PROLIFERATE (See: AC chargers, character
encodings, instant messaging, etc.) SITUATION: There are 14
competing standards. Geek: 14?! Ridiculous! We need to
develop one universal standard that covers everyone's use
cases. Fellow Geek: Yeah! Soon: SITUATION: There are 15
competing standards. {{Title text: Fortunately, the charging
one has been solved now that we've all standardized on
mini-USB. Or is it micro-USB? Shit.}}
Okay, but here it isn't about joining the CL standard, it is
the situation that we have "the Lisp editor" yet our Lisp is
much slower than other people's Lisp, and for no good reason
that I can understand, as Emacs is C, and SBCL is C. What's
the difference, why is one so much faster than the other?
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-12 23:09 ` Emanuel Berg
@ 2023-08-13 5:50 ` tomas
2023-08-13 8:38 ` Emanuel Berg
2023-08-13 15:54 ` [External] : " Drew Adams
2023-08-13 8:00 ` Andreas Schwab
2023-08-14 2:36 ` Richard Stallman
2 siblings, 2 replies; 247+ messages in thread
From: tomas @ 2023-08-13 5:50 UTC (permalink / raw)
To: emacs-devel
On Sun, Aug 13, 2023 at 01:09:38AM +0200, Emanuel Berg wrote:
> tomas wrote:
>
> > https://xkcd.com/927/
Try a bit of lateral thinking: what if, before embarking on
a "let's do everything new, the old sucks" kind of thing,
those proposing it would, at least, have a look at the
attempts made in this direction, and, you know, try to learn
why none of them took over the house?
This might yield a more interesting attempt.
As I see it, the main challenge for an Emacs maintainer isn't
that it is software, nor that it is a big, complex piece of
software. Rather, that its community is huge and diverse. Folks
are using it in extremely different ways (in part due to the
project's age), and moving something will break one usage dating
back to 1997 or something.
Still moving forward is a little wonder, and I'm genuinely in
awe of Eli's job (although I'm not happy about each and every
of his decisions, but I think that'll happen to everyone, due
to the situation sketched above and is thus part of it).
Cheers
--
t
* Re: Shrinking the C core
2023-08-13 5:50 ` tomas
@ 2023-08-13 8:38 ` Emanuel Berg
2023-08-13 15:54 ` [External] : " Drew Adams
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-13 8:38 UTC (permalink / raw)
To: emacs-devel
tomas wrote:
> As I see it, the main challenge for an Emacs maintainer
> isn't that it is software, nor that it is a big, complex
> piece of software. Rather, that its community is huge and
> diverse. Folks are using it in extremely different ways (in
> part due to the project's age), and moving something will
> break one usage dating back to 1997 or something.
>
> Still moving forward is a little wonder, and I'm genuinely
> in awe of Eli's job (although I'm not happy about each and
> every of his decisions, but I think that'll happen to
> everyone, due to the situation sketched above and is thus
> part of it).
Still, no good reason(s) that I can see why Elisp is so much
slower than Common Lisp ...
--
underground experts united
https://dataswamp.org/~incal
* RE: [External] : Re: Shrinking the C core
2023-08-13 5:50 ` tomas
2023-08-13 8:38 ` Emanuel Berg
@ 2023-08-13 15:54 ` Drew Adams
1 sibling, 0 replies; 247+ messages in thread
From: Drew Adams @ 2023-08-13 15:54 UTC (permalink / raw)
To: tomas@tuxteam.de, emacs-devel@gnu.org
> Still moving forward is a little wonder,
> and I'm genuinely in awe of Eli's job
> (although I'm not happy about each and
> every [one] of his decisions, but I think
> that'll happen to everyone...
+1.
* Re: Shrinking the C core
2023-08-12 23:09 ` Emanuel Berg
2023-08-13 5:50 ` tomas
@ 2023-08-13 8:00 ` Andreas Schwab
2023-08-13 9:21 ` Emanuel Berg
2023-08-14 2:36 ` Richard Stallman
2 siblings, 1 reply; 247+ messages in thread
From: Andreas Schwab @ 2023-08-13 8:00 UTC (permalink / raw)
To: emacs-devel
On Aug 13 2023, Emanuel Berg wrote:
> Okay, but here it isn't about joining the CL standard, it is
> the situation that we have "the Lisp editor" yet our Lisp is
> much slower than other people's Lisp, and for no good reason
> what I can understand as Emacs is C, and SBCL is C.
But SBCL is not portable.
--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
* Re: Shrinking the C core
2023-08-13 8:00 ` Andreas Schwab
@ 2023-08-13 9:21 ` Emanuel Berg
2023-08-14 7:27 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-13 9:21 UTC (permalink / raw)
To: emacs-devel
Andreas Schwab wrote:
>> Okay, but here it isn't about joining the CL standard, it
>> is the situation that we have "the Lisp editor" yet our
>> Lisp is much slower than other people's Lisp, and for no
>> good reason what I can understand as Emacs is C, and SBCL
>> is C.
>
> But SBCL is not portable.
Step one would be to identify why SBCL is so much faster.
Say it is faster for reasons A, B, C and D. Surely A, B, C and
D are not all unportable features, so one would first try to
add the same thing to the Elisp model.
If that fails maybe one would consider ECL or some other
faster, yet portable solution ...
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-13 9:21 ` Emanuel Berg
@ 2023-08-14 7:27 ` Alfred M. Szmidt
2023-08-14 7:36 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-14 7:27 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Step one would be to identify why SBCL is so much faster.
The short reason why SBCL is faster is that SBCL uses lots of type
checking and type inference, which allows it to figure out
optimizations better, and then uses assembly to implement such
optimizations. CL code compiled without DECLARE/DECLAIM is generally
slow, but can be made fast if you are very careful, and lucky that
the things you need are implemented as VOPs -- which are entirely
unportable.
It also has the luxury of using a language that is set more or less in
stone, that doesn't change constantly. Which means the developers can
spend quite a bit more time on optimizing what they have.
If that fails maybe one would consider ECL or some other
faster, yet portable solution ...
"Porting" Emacs to CL is not just slap ECL or some CL implementation
on top. Adding type checking of the sort that SBCL does, would be
possible but ALOT of work. It would most definitly require a entierly
new VM, and with that comes dragons.
* Re: Shrinking the C core
2023-08-14 7:27 ` Alfred M. Szmidt
@ 2023-08-14 7:36 ` Ihor Radchenko
2023-08-14 7:50 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-14 7:36 UTC (permalink / raw)
To: Alfred M. Szmidt, Andrea Corallo; +Cc: Emanuel Berg, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> Step one would be to identify why SBCL is so much faster.
>
> The short reason why SBCL is faster is that SBCL uses lots of type
> checking and interference, which allows it to figure out optimizations
> better. And then using assembly to implement such optimizations. CL
> code compiled without DECLARE/DECLAIM is generally slow, but can be
> made fast if you are very careful, and lucky that the things you need
> are implemented as VOPs -- which are entierly unportable.
AFAIK, Elisp is full of type checks (all these BUFFERP/CHECK_STRING in
C code). Also, AFAIR, native-comp is making use of some of the type
checks as well, allowing some extra optimizations.
So, there might be some room for improvement after all.
Do you have some references detailing what SBCL does?
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Shrinking the C core
2023-08-14 7:36 ` Ihor Radchenko
@ 2023-08-14 7:50 ` Alfred M. Szmidt
2023-08-15 22:57 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-14 7:50 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: akrl, incal, emacs-devel
AFAIK, Elisp is full of type checks (all these BUFFERP/CHECK_STRING in
C code). Also, AFAIR, native-comp is making use of some of the type
checks as well, allowing some extra optimizations.
SBCL does far more detailed checks than that, which makes it
very unportable, since the base case is trivial but the optimized
one is not. Check for example FLOOR in sbcl/src/code/numbers.lisp,
and compare it to FLOOR in Emacs Lisp.
SBCL has many luxuries that Emacs does not have.
So, there might be some room for improvement after all.
There is always a door for that ... but someone needs to open it.
Do you have some references detailing what SBCL does?
http://www.sbcl.org/sbcl-internals/ and the source code.
* Re: Shrinking the C core
2023-08-14 7:50 ` Alfred M. Szmidt
@ 2023-08-15 22:57 ` Emanuel Berg
2023-08-16 10:27 ` Ihor Radchenko
2023-08-18 8:35 ` Shrinking the C core Aurélien Aptel
0 siblings, 2 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-15 22:57 UTC (permalink / raw)
To: emacs-devel
Alfred M. Szmidt wrote:
> one is not. Check for example FLOOR in
> sbcl/src/code/numbers.lisp , and compare it to FLOOR in
> Emacs Lisp.
Are we talking `floor' as in (floor 3.1415) ; 3 ?
How do we make a benchmark for that? Run it 1000+ times for
random floats? But maybe SBCL has faster random as well!
Actually, even I have a better random than Elisp:
https://dataswamp.org/~incal/emacs-init/random-urandom/
It is more random than `random', as it uses the Linux
/dev/urandom. But I didn't compare them in terms of speed. It is
a so-called dynamic module, that is, not built-in but still
written in C.
Anyway, I'd be happy to add something like `floor' to the
benchmarks, for sure!
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-15 22:57 ` Emanuel Berg
@ 2023-08-16 10:27 ` Ihor Radchenko
2023-08-19 13:29 ` Emanuel Berg
2023-08-18 8:35 ` Shrinking the C core Aurélien Aptel
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-16 10:27 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> Are we talking `floor' as in (floor 3.1415) ; 3 ?
>
> How do we make a benchmark for that? Run it 1000+ times for
> random floats? But maybe SBCL has faster random as well!
You can generate random number sequence first, excluding it from the
benchmark, and then benchmark mapping #'floor on the already generated
sequence. Ideally, use the same sequence for both Elisp and SBCL.
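A minimal Elisp sketch of that setup (the list size and repeat
count are arbitrary choices, not part of any agreed benchmark):

;; Build the input list outside the timed region so that only
;; the calls to `floor' are measured, not the random generation.
(require 'cl-lib)

(let ((floats (cl-loop repeat 100000
                       collect (/ (random 1000000) 1000.0))))
  ;; `benchmark-run' returns (ELAPSED-SECONDS GC-RUNS GC-SECONDS).
  (benchmark-run 100
    (mapc #'floor floats)))

To compare against SBCL, the same list of floats could be written
to a file and read back on the CL side, so both implementations
map `floor' over identical data.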
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Shrinking the C core
2023-08-16 10:27 ` Ihor Radchenko
@ 2023-08-19 13:29 ` Emanuel Berg
2023-08-20 5:09 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-19 13:29 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> Are we talking `floor' as in (floor 3.1415) ; 3 ?
>>
>> How do we make a benchmark for that? Run it 1000+ times for
>> random floats? But maybe SBCL has faster random as well!
>
> You can generate random number sequence first, excluding it
> from the benchmark, and then benchmark mapping #'floor on
> the already generated sequence. Ideally, use the same
> sequence for both Elisp and SBCL.
You are right, good idea. But maybe it is already known why
floor is slower in Elisp than SBCL?
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-19 13:29 ` Emanuel Berg
@ 2023-08-20 5:09 ` Ihor Radchenko
2023-08-20 6:51 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 5:09 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> You can generate random number sequence first, excluding it
>> from the benchmark, and then benchmark mapping #'floor on
>> the already generated sequence. Ideally, use the same
>> sequence for both Elisp and SBCL.
>
> You are right, good idea. But maybe it is already known why
> floor is slower in Elisp than SBCL?
The discussion about floor started from Alfred using `floor' as an
example where CL uses system-dependent optimizations and is thus
much faster. https://yhetil.org/emacs-devel/E1qVSLD-00079S-Gg@fencepost.gnu.org/
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Shrinking the C core
2023-08-20 5:09 ` Ihor Radchenko
@ 2023-08-20 6:51 ` Emanuel Berg
2023-08-20 7:14 ` Ihor Radchenko
2023-08-20 21:51 ` [External] : " Drew Adams
0 siblings, 2 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-20 6:51 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>>> You can generate random number sequence first, excluding
>>> it from the benchmark, and then benchmark mapping #'floor
>>> on the already generated sequence. Ideally, use the same
>>> sequence for both Elisp and SBCL.
>>
>> You are right, good idea. But maybe it is already known why
>> floor is slower in Elisp than SBCL?
>
> The discussion about floor started from Alfred using `floor'
> as an example where CL uses system-dependent optimizations
> and thus being much faster.
So the answer to the question, Why is SBCL faster?
is "optimizations". And the answer to the question, Why don't
we have those optimizations? is "they are not portable"?
But isn't that what we do already with compilation and in
particular native compilation, why can't that add
optimizations for the native system?
Some commands/Elisp on that (compilation and native
compilation) as a side note, maybe someone finds them
entertaining:
https://dataswamp.org/~incal/conf/.zsh/install-emacs
https://dataswamp.org/~incal/emacs-init/native.el
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-20 6:51 ` Emanuel Berg
@ 2023-08-20 7:14 ` Ihor Radchenko
2023-08-20 7:52 ` Emanuel Berg
2023-08-20 8:28 ` Alfred M. Szmidt
2023-08-20 21:51 ` [External] : " Drew Adams
1 sibling, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 7:14 UTC (permalink / raw)
To: Emanuel Berg, Alfred M. Szmidt; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> The discussion about floor started from Alfred using `floor'
>> as an example where CL uses system-dependent optimizations
>> and thus being much faster.
>
> So the answer to the question, Why is SBCL faster?
> is "optimizations". And the answer to the question, Why don't
> we have those optimizations? is "they are not portable"?
Looking at
https://github.com/sbcl/sbcl/blob/master/src/code/numbers.lisp#L390,
they employ certain x86-64-, x86-, ppc64-specific optimizations.
Although, Elisp's rounding_driver is not too bad actually. It also
takes shortcuts depending on the argument type.
AFAIU, the main difference in SBCL vs. Elisp is that Elisp type checks
are often called repetitively on the same values. Even though the checks
are rather fast (typecheck is usually just a single xor + equal
operation), repetitive calls do add up.
And even this is not a definitive answer. I do not think that we can
point out a single reason why SBCL is faster. I am not even sure if SBCL
is _always_ faster.
> But isn't that what we do already with compilation and in
> particular native compilation, why can't that add
> optimizations for the native system?
If we talk about type checking, Elisp uses dynamic typing and
compilation cannot do much about it. Native compilation also does not
touch C subroutines - the place where typechecks are performed.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Shrinking the C core
2023-08-20 7:14 ` Ihor Radchenko
@ 2023-08-20 7:52 ` Emanuel Berg
2023-08-20 13:01 ` tomas
2023-08-20 8:28 ` Alfred M. Szmidt
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-20 7:52 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
> If we talk about type checking, Elisp uses dynamic typing
> and compilation cannot do much about it. Native compilation
> also does not touch C subroutines - the place where
> typechecks are performed.
So our typechecks are not optimized, as we can native compile
Elisp but not C.
Worse, with dynamic typing they have to be used repeatedly and
during execution /o\
Point taken - but we are compiling the C subroutines, so in
theory optimization of typechecks could happen there, and if
we use them more often and at execution time, it would
actually be a bigger win for us than for them, i.e. SBCL, to
have them?
I guess SBCL just always has to be best at eeeverything ...
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-20 7:52 ` Emanuel Berg
@ 2023-08-20 13:01 ` tomas
2023-08-20 13:12 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: tomas @ 2023-08-20 13:01 UTC (permalink / raw)
To: emacs-devel
On Sun, Aug 20, 2023 at 09:52:17AM +0200, Emanuel Berg wrote:
> Ihor Radchenko wrote:
>
> > If we talk about type checking, Elisp uses dynamic typing
> > and compilation cannot do much about it. Native compilation
> > also does not touch C subroutines - the place where
> > typechecks are performed.
>
> So our typechecks are not optimized, as we can native compile
> Elisp but not C.
>
> Worse, with dynamic typing they have to be used repeatedly and
> during execution /o\
>
> Point taken - but we are compiling the C subroutines so in
> theory optimization of typechecks could happen there [...]
I humbly suggest you read up a bit on compilation. Those
type checks happen at compile time for a reason: the very
expensive data flow analysis provides the compiler with
information which is quite difficult to obtain later.
If you /know/ that some x will always be a nonnegative
integer (because every path leading to your execution
node either sets it to zero or increments it, for example),
you can do away with that test and else branch:
(if (> x 0)
    ...
  ...)
but for that you have to take your program painstakingly
apart into basic blocks, take note of what leads to where
and think hard about which variables are munged where.
That's why modern browsers come with more than one compiler:
the price of that painstaking process is so high that you
want to start quick and dirty (and slow) and focus on those
things which really need attention.
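In Common Lisp that kind of flow information can also be handed
to the compiler directly via declarations; a hedged sketch (the
function name is invented for illustration):

;; If the compiler is told X is a positive integer, the else
;; branch below is provably dead, and a compiler like SBCL can
;; fold the comparison against zero away entirely.
(defun positive-branch (x)
  (declare (type (integer 1 *) x)
           (optimize (speed 3)))
  (if (> x 0)
      :reachable
      :provably-dead))

In SBCL, (disassemble 'positive-branch) can confirm whether the
comparison survives compilation.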
See [1] for a discussion in Guile's context.
Cheers
[1] https://wingolog.org/archives/2020/06/03/a-baseline-compiler-for-guile
--
t
* Re: Shrinking the C core
2023-08-20 13:01 ` tomas
@ 2023-08-20 13:12 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 13:12 UTC (permalink / raw)
To: tomas; +Cc: emacs-devel
<tomas@tuxteam.de> writes:
> I humbly suggest you read up a bit on compilation. Those
> type checks happen at compile time for a reason: the very
> expensive data flow analysis provides the compiler with
> information which is quite difficult to obtain later.
> ...
> but for that you have to take your program painstakingly
> apart into basic blocks, take note of what leads to where
> and think hard about which variables are munged where.
Correct me if I am wrong, but don't we already make use of the extensive
static analysis when native-compiling Elisp? AFAIU, the main problem
with typechecks is that native-comp cannot do anything about
subroutines, where a number of repetitive typechecks are performed. So,
subroutine code cannot make use of the information provided by
native-comp static analysis performed by GCC.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
* Re: Shrinking the C core
2023-08-20 7:14 ` Ihor Radchenko
2023-08-20 7:52 ` Emanuel Berg
@ 2023-08-20 8:28 ` Alfred M. Szmidt
2023-08-20 9:29 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 8:28 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: incal, emacs-devel
And even this is not a definitive answer. I do not think that we can
point out a single reason why SBCL is faster. I am not even sure if SBCL
is _always_ faster.
Always faster than what? What are you comparing? SBCL is a compiler,
Emacs is more than that.
It should be quite obvious why SBCL is faster than the Emacs Lisp VM
(or even native). Just look at this call to (car "foo"), and compare
what happens in Emacs.
* (disassemble 'foo)
; disassembly for FOO
; Size: 166 bytes. Origin: #x225D873F ; FOO
; 3F: 488B042590060020 MOV RAX, [#x20000690]
; 47: 488945F8 MOV [RBP-8], RAX
; 4B: 48892C2560060020 MOV [#x20000660], RBP
; 53: 488B142518000020 MOV RDX, [#x20000018]
; 5B: 488D4210 LEA RAX, [RDX+16]
; 5F: 483B042520000020 CMP RAX, [#x20000020]
; 67: 7770 JA L2
; 69: 4889042518000020 MOV [#x20000018], RAX
; 71: L0: 488B0570FFFFFF MOV RAX, [RIP-144] ; "foo"
; 78: 488902 MOV [RDX], RAX
; 7B: 48C7420817010020 MOV QWORD PTR [RDX+8], #x20000117 ; NIL
; 83: 80CA07 OR DL, 7
; 86: 48312C2560060020 XOR [#x20000660], RBP
; 8E: 7402 JEQ L1
; 90: CC09 INT3 9 ; pending interrupt trap
; 92: L1: 4C8D4424F0 LEA R8, [RSP-16]
; 97: 4883EC30 SUB RSP, 48
; 9B: BFAF0B1520 MOV EDI, #x20150BAF ; 'LIST
; A0: 488B3551FFFFFF MOV RSI, [RIP-175] ; '(VALUES
; (SIMPLE-ARRAY ..))
; A7: 488B0552FFFFFF MOV RAX, [RIP-174] ; '("foo")
; AE: 498940F0 MOV [R8-16], RAX
; B2: 488B054FFFFFFF MOV RAX, [RIP-177] ; "(CAR \"foo\")"
; B9: 498940E8 MOV [R8-24], RAX
; BD: 49C740E017010020 MOV QWORD PTR [R8-32], #x20000117 ; NIL
; C5: B90C000000 MOV ECX, 12
; CA: 498928 MOV [R8], RBP
; CD: 498BE8 MOV RBP, R8
; D0: B882B12620 MOV EAX, #x2026B182 ; #<FDEFN SB-C::%COMPILE-TIME-TYPE-ERROR>
; D5: FFD0 CALL RAX
; D7: CC10 INT3 16 ; Invalid argument count trap
; D9: L2: 6A10 PUSH 16
; DB: FF1425B0080020 CALL [#x200008B0] ; #x21A00540: LIST-ALLOC-TRAMP
; E2: 5A POP RDX
; E3: EB8C JMP L0
NIL
*
> But isn't that what we do already with compilation and in
> particular native compilation, why can't that add
> optimizations for the native system?
If we talk about type checking, Elisp uses dynamic typing and
compilation cannot do much about it. Native compilation also does not
touch C subroutines - the place where typechecks are performed.
SBCL implements a Lisp; Lisp is by definition dynamically typed.
* Re: Shrinking the C core
2023-08-20 8:28 ` Alfred M. Szmidt
@ 2023-08-20 9:29 ` Emanuel Berg
2023-08-20 15:22 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-20 9:29 UTC (permalink / raw)
To: emacs-devel
Alfred M. Szmidt wrote:
> And even this is not a definitive answer. I do not think
> that we can point out a single reason why SBCL is faster.
> I am not even sure if SBCL is _always_ faster.
>
> Always faster than what? What are you comparing?
We are working on it.
> SBCL is a compiler, Emacs is more than that.
Including SBCL with SLIME, but that would still be CL with
SBCL, and not Elisp, which is what we are (not) comparing with.
> It should be quite obvious why SBCL is faster than the Emacs
> Lisp VM (or even native). Just look at this call to (car
> "foo"), and compare what happens in Emacs.
>
> * (disassemble 'foo)
> ; disassembly for FOO
> ; Size: 166 bytes. Origin: #x225D873F ; FOO
> ; 3F: 488B042590060020 MOV RAX, [#x20000690]
> ; 47: 488945F8 MOV [RBP-8], RAX
> ; 4B: 48892C2560060020 MOV [#x20000660], RBP
> ; 53: 488B142518000020 MOV RDX, [#x20000018]
> ; 5B: 488D4210 LEA RAX, [RDX+16]
> ; 5F: 483B042520000020 CMP RAX, [#x20000020]
> ; 67: 7770 JA L2
> ; 69: 4889042518000020 MOV [#x20000018], RAX
> ; 71: L0: 488B0570FFFFFF MOV RAX, [RIP-144] ; "foo"
> ; 78: 488902 MOV [RDX], RAX
> ; 7B: 48C7420817010020 MOV QWORD PTR [RDX+8], #x20000117 ; NIL
> ; 83: 80CA07 OR DL, 7
> ; 86: 48312C2560060020 XOR [#x20000660], RBP
> ; 8E: 7402 JEQ L1
> ; 90: CC09 INT3 9 ; pending interrupt trap
> ; 92: L1: 4C8D4424F0 LEA R8, [RSP-16]
> ; 97: 4883EC30 SUB RSP, 48
> ; 9B: BFAF0B1520 MOV EDI, #x20150BAF ; 'LIST
> ; A0: 488B3551FFFFFF MOV RSI, [RIP-175] ; '(VALUES
> ; (SIMPLE-ARRAY ..))
> ; A7: 488B0552FFFFFF MOV RAX, [RIP-174] ; '("foo")
> ; AE: 498940F0 MOV [R8-16], RAX
> ; B2: 488B054FFFFFFF MOV RAX, [RIP-177] ; "(CAR \"foo\")"
> ; B9: 498940E8 MOV [R8-24], RAX
> ; BD: 49C740E017010020 MOV QWORD PTR [R8-32], #x20000117 ; NIL
> ; C5: B90C000000 MOV ECX, 12
> ; CA: 498928 MOV [R8], RBP
> ; CD: 498BE8 MOV RBP, R8
> ; D0: B882B12620 MOV EAX, #x2026B182 ; #<FDEFN SB-C::%COMPILE-TIME-TYPE-ERROR>
> ; D5: FFD0 CALL RAX
> ; D7: CC10 INT3 16 ; Invalid argument count trap
> ; D9: L2: 6A10 PUSH 16
> ; DB: FF1425B0080020 CALL [#x200008B0] ; #x21A00540: LIST-ALLOC-TRAMP
> ; E2: 5A POP RDX
> ; E3: EB8C JMP L0
> NIL
> *
Okay?
>> If we talk about type checking, Elisp uses dynamic typing
>> and compilation cannot do much about it. Native compilation
>> also does not touch C subroutines - the place where
>> typechecks are performed.
>
> SBCL implements a Lisp, Lisp by definition is
> dynamically typed.
Only for the kind of use (code) that we are used to. See this:
https://medium.com/@MartinCracauer/static-type-checking-in-the-programmable-programming-language-lisp-79bb79eb068a
For example
(defunt meh5c ((int p1) (int p2))
(+ p1 p2))
(meh5c 1 2) ; ==> 3
with defunt being a macro that uses declare.
A simple example is given earlier in the text,
(defun meh (p1)
(declare (fixnum p1))
(+ p1 3))
--
underground experts united
https://dataswamp.org/~incal
* Re: Shrinking the C core
2023-08-20 9:29 ` Emanuel Berg
@ 2023-08-20 15:22 ` Alfred M. Szmidt
2023-08-20 15:36 ` Ihor Radchenko
2023-08-20 20:32 ` Emanuel Berg
0 siblings, 2 replies; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 15:22 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Please keep the CC intact, not everyone is subscribed.
> It should be quite obvious why SBCL is faster than the Emacs
> Lisp VM (or even native). Just look at this call to (car
> "foo"), and compare what happens in Emacs.
>
> * (disassemble 'foo)
> ; disassembly for FOO
> ; Size: 166 bytes. Origin: #x225D873F ; FOO
> ; 3F: 488B042590060020 MOV RAX, [#x20000690]
> ; 47: 488945F8 MOV [RBP-8], RAX
> ; 4B: 48892C2560060020 MOV [#x20000660], RBP
> ; 53: 488B142518000020 MOV RDX, [#x20000018]
> ; 5B: 488D4210 LEA RAX, [RDX+16]
> ; 5F: 483B042520000020 CMP RAX, [#x20000020]
> ; 67: 7770 JA L2
> ; 69: 4889042518000020 MOV [#x20000018], RAX
> ; 71: L0: 488B0570FFFFFF MOV RAX, [RIP-144] ; "foo"
> ; 78: 488902 MOV [RDX], RAX
> ; 7B: 48C7420817010020 MOV QWORD PTR [RDX+8], #x20000117 ; NIL
> ; 83: 80CA07 OR DL, 7
> ; 86: 48312C2560060020 XOR [#x20000660], RBP
> ; 8E: 7402 JEQ L1
> ; 90: CC09 INT3 9 ; pending interrupt trap
> ; 92: L1: 4C8D4424F0 LEA R8, [RSP-16]
> ; 97: 4883EC30 SUB RSP, 48
> ; 9B: BFAF0B1520 MOV EDI, #x20150BAF ; 'LIST
> ; A0: 488B3551FFFFFF MOV RSI, [RIP-175] ; '(VALUES
> ; (SIMPLE-ARRAY ..))
> ; A7: 488B0552FFFFFF MOV RAX, [RIP-174] ; '("foo")
> ; AE: 498940F0 MOV [R8-16], RAX
> ; B2: 488B054FFFFFFF MOV RAX, [RIP-177] ; "(CAR \"foo\")"
> ; B9: 498940E8 MOV [R8-24], RAX
> ; BD: 49C740E017010020 MOV QWORD PTR [R8-32], #x20000117 ; NIL
> ; C5: B90C000000 MOV ECX, 12
> ; CA: 498928 MOV [R8], RBP
> ; CD: 498BE8 MOV RBP, R8
> ; D0: B882B12620 MOV EAX, #x2026B182 ; #<FDEFN SB-C::%COMPILE-TIME-TYPE-ERROR>
> ; D5: FFD0 CALL RAX
> ; D7: CC10 INT3 16 ; Invalid argument count trap
> ; D9: L2: 6A10 PUSH 16
> ; DB: FF1425B0080020 CALL [#x200008B0] ; #x21A00540: LIST-ALLOC-TRAMP
> ; E2: 5A POP RDX
> ; E3: EB8C JMP L0
> NIL
> *
Okay?
I guess that you do not understand the above? Or what? Do you know
and understand what happens in Emacs when a similar call is done? It
is far more than "166 bytes".
>> If we talk about type checking, Elisp uses dynamic typing
>> and compilation cannot do much about it. Native compilation
>> also does not touch C subroutines - the place where
>> typechecks are performed.
>
> SBCL implements a Lisp, Lisp by definition is
> dynamically typed.
Only for the kind of use (code) that we are used to. See this:
https://medium.com/@MartinCracauer/static-type-checking-in-the-programmable-programming-language-lisp-79bb79eb068a
This has literally nothing to do with the difference between static
typing and dynamic typing. The author, and you, have it completely
backwards, to the point where I need to suggest that you take some
time to read up on basic Lisp compilers, and then look into very
good Lisp compilers (CMUCL and SBCL come to mind). It is already
showing that it is very hard to even explain basic Lisp compiler
behaviour without going back to fundamentals.
* Re: Shrinking the C core
2023-08-20 15:22 ` Alfred M. Szmidt
@ 2023-08-20 15:36 ` Ihor Radchenko
2023-08-20 15:45 ` Eli Zaretskii
` (2 more replies)
2023-08-20 20:32 ` Emanuel Berg
1 sibling, 3 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 15:36 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: Emanuel Berg, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> Please keep the CC intact, not everyone subscribed.
>
> > It should be quite obvious why SBCL is faster than the Emacs
> > Lisp VM (or even native). Just look at this call to (car
> > "foo"), and compare what happens in Emacs.
> >
> > * (disassemble 'foo)
> > ; disassembly for FOO
> > ; Size: 166 bytes. Origin: #x225D873F ; FOO
>> ...
> Okay?
>
> I guess that you do not understand the above? Or what? Do you know
> and understand what happens in Emacs when a similar call is done? It
> is far more than "166 bytes".
It would be helpful if you show us what happens in Elisp with a similar
call. Especially after native compilation.
I am asking genuinely because `car' (1) has a dedicated opcode and thus
should be one of the best-optimized function calls on Elisp side; (2)
Fcar is nothing but
/* Take the car or cdr of something whose type is not known. */
INLINE Lisp_Object
CAR (Lisp_Object c)
{
  if (CONSP (c))
    return XCAR (c); // <- XCONS (c)->u.s.car
  if (!NILP (c))
    wrong_type_argument (Qlistp, c);
  return Qnil;
}
So, it is a very simple example that can actually illustrate the basic
differences between Elisp and CL. It would be nice if you (considering
your low-level understanding) could provide us with an analysis of what
is different between the Elisp and CL implementations of such a simple
function.
> This has literally nothing to do with the difference between static
> typing and dynamic typing. The author, and you, have it completely
> backwards ...
I am sorry, because it was my message that started the confusion.
I was mostly referring to the separation between Elisp
interpreted/byte/native code and C subrs. AFAIU, static analysis info
cannot be passed between these two parts of the Emacs runtime: a subr
cannot know in advance what Lisp_Object type it is working on, even if
static analysis of the caller Elisp code has such information (e.g.
from the GCC JIT compiler).
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 15:36 ` Ihor Radchenko
@ 2023-08-20 15:45 ` Eli Zaretskii
2023-08-20 15:54 ` Ihor Radchenko
2023-08-27 3:25 ` Emanuel Berg
2023-08-20 16:03 ` Alfred M. Szmidt
2023-08-20 19:14 ` Eli Zaretskii
2 siblings, 2 replies; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-20 15:45 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Emanuel Berg <incal@dataswamp.org>, emacs-devel@gnu.org
> Date: Sun, 20 Aug 2023 15:36:34 +0000
>
> "Alfred M. Szmidt" <ams@gnu.org> writes:
>
> > I guess that you do not understand the above? Or what? Do you know
> > and understand what happens in Emacs when a similar call is done? It
> > is far more than "166 bytes".
>
> It would be helpful if you show us what happens in Elisp with a similar
> call.
See below.
> Especially after native compilation.
Native compilation doesn't affect 'car', because it's a primitive.
> I am asking genuinely because `car' (1) has dedicated opt code and thus
> should be one of the best-optimized function calls on Elisp side; (2)
> Fcar is nothing but
>
> /* Take the car or cdr of something whose type is not known.  */
> INLINE Lisp_Object
> CAR (Lisp_Object c)
> {
>   if (CONSP (c))
>     return XCAR (c); // <- XCONS (c)->u.s.car
>   if (!NILP (c))
>     wrong_type_argument (Qlistp, c);
>   return Qnil;
> }
It's very easy to see the code of 'car' in Emacs. All you need is run
GDB:
$ gdb ./emacs
...
(gdb) disassemble /m Fcar
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 15:45 ` Eli Zaretskii
@ 2023-08-20 15:54 ` Ihor Radchenko
2023-08-20 16:29 ` Alfred M. Szmidt
2023-08-27 3:25 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 15:54 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: ams, incal, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
>> It would be helpful if you show us what happens in Elisp with a similar
>> call.
>
> See below.
Sorry, I was not clear. I was asking for help comparing the
disassembly of the Elisp and CL versions. I myself am not familiar
with assembly code.
> It's very easy to see the code of 'car' in Emacs. All you need is run
> GDB:
>
> $ gdb ./emacs
> ...
> (gdb) disassemble /m Fcar
So, while I can do this mechanically, I will not understand it.
Not to the level needed to draw conclusions about what is different in
Elisp compared to CL.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 15:54 ` Ihor Radchenko
@ 2023-08-20 16:29 ` Alfred M. Szmidt
2023-08-20 16:37 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 16:29 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
> It's very easy to see the code of 'car' in Emacs. All you need is run
> GDB:
>
> $ gdb ./emacs
> ...
> (gdb) disassemble /m Fcar
So, while I can do this mechanically, I will not understand it.
Not to the level needed to draw conclusions about what is different in
Elisp compared to CL.
The issue is not Emacs Lisp vs. Common Lisp. What you mean is the
difference between SBCL and GNU Emacs.
The question can be rephrased as: what is the difference between GNU
Emacs and GCC? Why is GCC so much faster? If you phrase it like
that, you will see that it really doesn't make much sense anymore,
since you are comparing different things.
Emacs could implement optimizations that GCC does for C or Ada
... but it breaks down very quickly.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 16:29 ` Alfred M. Szmidt
@ 2023-08-20 16:37 ` Ihor Radchenko
2023-08-20 17:19 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 16:37 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> The issue is not Emacs Lisp vs. Common Lisp. What you mean is the
> difference between SBCL and GNU Emacs.
>
> The question can be rephrased as: what is the difference between GNU
> Emacs and GCC? Why is GCC so much faster? If you phrase it like
> that, you will see that it really doesn't make much sense anymore,
> since you are comparing different things.
> Emacs could implement optimizations that GCC does for C or Ada
> ... but it breaks down very quickly.
Not really. Native compilation already uses GCC. At least on the byte
code instructions and, separately, in subr code.
There is more than just GCC vs byte code VM in it.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 16:37 ` Ihor Radchenko
@ 2023-08-20 17:19 ` Alfred M. Szmidt
2023-08-20 17:31 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 17:19 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
> The issue is not Emacs Lisp vs. Common Lisp. What you mean is the
> difference between SBCL and GNU Emacs.
>
> The question can be rephrased as: what is the difference between GNU
> Emacs and GCC? Why is GCC so much faster? If you phrase it like
> that, you will see that it really doesn't make much sense anymore,
> since you are comparing different things.
> Emacs could implement optimizations that GCC does for C or Ada
> ... but it breaks down very quickly.
Not really. Native compilation already uses GCC. At least on the byte
code instructions and, separately, in subr code.
There is more than just GCC vs byte code VM in it.
It is not about native compilation! It is about what OPTIMIZATIONS
can be done to the actual code flow. Merely feeding code through GCC
does not by itself do ANY optimization of how the Lisp code flows!
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 17:19 ` Alfred M. Szmidt
@ 2023-08-20 17:31 ` Ihor Radchenko
2023-08-20 18:54 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 17:31 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> Not really. Native compilation already uses GCC. At least on the byte
> code instructions and, separately, in subr code.
> There is more than just GCC vs byte code VM in it.
>
> It is not about native compilation! It is about what OPTIMIZATIONS
> can be done to the actual code flow. Just using GCC doesn't do ANY
> optimizations to how the Lisp code is optimized or how its flow is
> changed due to optimizations!
Then, what does GCC do? AFAIK, GCC JIT takes the Elisp byte code,
transforms it into JIT pseudocode, and optimizes the actual code flow.
For example, when I write
(when (> x y) (when (> x y) x))
I expect GCC JIT to throw away the duplicate comparison.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 17:31 ` Ihor Radchenko
@ 2023-08-20 18:54 ` Alfred M. Szmidt
2023-08-20 19:07 ` Eli Zaretskii
` (2 more replies)
0 siblings, 3 replies; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 18:54 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> Not really. Native compilation already uses GCC. At least on the byte
> code instructions and, separately, in subr code.
> There is more than just GCC vs byte code VM in it.
>
> It is not about native compilation! It is about what OPTIMIZATIONS
> can be done to the actual code flow. Just using GCC doesn't do ANY
> optimizations to how the Lisp code is optimized or how its flow is
> changed due to optimizations!
Then, what does GCC do? AFAIK, GCC JIT takes the Elisp byte code,
transforms it into JIT pseudocode, and optimizes the actual code flow.
What does GCC do _WHERE_? What backend? What language? You're
speaking in such broad terms that it makes it impossible to continue
this discussion. I don't know how the native compilation works, but
no matter what you feed to GCC, it cannot do magic, and any
optimization depends on what the Emacs compiler emits.
For example, when I write
(when (> x y) (when (> x y) x))
I expect GCC JIT to throw away the duplicate comparison.
Why do you expect that? Why do you think it is duplicate? Where are
the guarantees that > or WHEN don't have side-effects? Do you know
the exact type of X and Y so you can skip a cascade of type checks to
pick the right comparison operator? Can you use fixnum comparison of
a specific bit width? Do you need to use bignum comparison?
That is the type of information SBCL knows about, or allows the user
to specify. Emacs does not have that today, and that incurs one set
of overhead. There are plenty more...
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 18:54 ` Alfred M. Szmidt
@ 2023-08-20 19:07 ` Eli Zaretskii
2023-08-27 3:53 ` Emanuel Berg
2023-08-20 19:15 ` Ihor Radchenko
2023-08-27 3:48 ` Emanuel Berg
2 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-20 19:07 UTC (permalink / raw)
To: Alfred M. Szmidt, Andrea Corallo; +Cc: yantar92, incal, emacs-devel
> From: "Alfred M. Szmidt" <ams@gnu.org>
> Cc: eliz@gnu.org, incal@dataswamp.org, emacs-devel@gnu.org
> Date: Sun, 20 Aug 2023 14:54:46 -0400
>
>
> "Alfred M. Szmidt" <ams@gnu.org> writes:
>
> > Not really. Native compilation already uses GCC. At least on the byte
> > code instructions and, separately, in subr code.
> > There is more than just GCC vs byte code VM in it.
> >
> > It is not about native compilation! It is about what OPTIMIZATIONS
> > can be done to the actual code flow. Just using GCC doesn't do ANY
> > optimizations to how the Lisp code is optimized or how its flow is
> > changed due to optimizations!
>
> Then, what does GCC do? AFAIK, GCC JIT takes the Elisp byte code,
> transforms it into JIT pseudocode, and optimizes the actual code flow.
>
> What does GCC do _WHERE_? What backend? What language? You're
> speaking in such broad terms that it makes it impossible to continue
> this discussion. I don't know how the native compilation works, but
> no matter what you feed to GCC, it cannot do magic, and any
> optimization depends on what the Emacs compiler emits.
>
> For example, when I write
>
> (when (> x y) (when (> x y) x))
>
> I expect GCC JIT to throw away the duplicate comparison.
>
> Why do you expect that? Why do you think it is duplicate? Where are
> the guarantees that > or WHEN don't have side-effects?
Andrea will correct me if I'm wrong, but AFAIU Ihor is correct: native
compilation in Emacs emits a kind of GIMPLE, which is then subject to
GCC's optimization passes. That's why we have the native-comp-speed
variable, which is mapped directly into GCC's -On optimization
switches, with n = 0..3.
Maybe the SBCL compiler has better optimizations, but it is incorrect
to say that Emacs's native compilation has none.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:07 ` Eli Zaretskii
@ 2023-08-27 3:53 ` Emanuel Berg
0 siblings, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 3:53 UTC (permalink / raw)
To: emacs-devel
Eli Zaretskii wrote:
> Andrea will correct me if I'm wrong, but AFAIU Ihor is
> correct: native compilation in Emacs emits a kind of GIMPLE,
> which is then subject to GCC optimization pass. That's why
> we have the native-comp-speed variable, which is mapped
> directly into the GCC's -On optimization switches, with n =
> 0..3.
The docstring for `native-comp-speed'.
Optimization level for native compilation, a number between -1 and 3.
-1 functions are kept in bytecode form and no native compilation is performed
(but *.eln files are still produced, and include the compiled code in
bytecode form).
0 native compilation is performed with no optimizations.
1 light optimizations.
2 max optimization level fully adherent to the language semantic.
3 max optimization level, to be used only when necessary.
Warning: with 3, the compiler is free to perform dangerous optimizations.
I have it at 2, which is the default value. Maybe I should try
3 and see if something breaks. I wonder what those dangerous
optimizations are?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 18:54 ` Alfred M. Szmidt
2023-08-20 19:07 ` Eli Zaretskii
@ 2023-08-20 19:15 ` Ihor Radchenko
2023-08-20 19:24 ` Ihor Radchenko
` (2 more replies)
2023-08-27 3:48 ` Emanuel Berg
2 siblings, 3 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 19:15 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> Then, what does GCC do? AFAIK, GCC JIT takes the Elisp byte code,
> transforms it into JIT pseudocode, and optimizes the actual code flow.
>
> What does GCC do _WHERE_? What backend? What language? You're
> speaking in such broad terms that it makes it impossible to continue
> this discussion. I don't know how the native compilation works, but
> no matter what you feed to GCC, it cannot do magic, and any
> optimization depends on what the Emacs compiler emits.
Native compilation provides the necessary information about Elisp to GCC.
Otherwise, native compilation would be useless.
You may check out the details in
https://zenodo.org/record/3736363 and
https://toobnix.org/w/1f997b3c-00dc-4f7d-b2ce-74538c194fa7
> For example, when I write
>
> (when (> x y) (when (> x y) x))
>
> I expect GCC JIT to throw away the duplicate comparison.
>
> Why do you expect that? Why do you think it is duplicate? Where are
> the guarantees that > or WHEN don't have side-effects? Do you know
> the exact type of X and Y so you can skip a cascade of type checks to
> pick the right comparison operator? Can you use fixnum comparison of
> a specific bit width? Do you need to use bignum comparison?
At least some of these questions are answered by the code on the Emacs
side. The native compiler transforms the Elisp byte code, using its
knowledge about function purity, types, and maybe other things, into
LIMPLE, which can then be fed to GCC JIT. GCC JIT then uses the
provided info to do the actual optimization.
> That is the type of information SBCL knows about, or allows the user
> to specify. Emacs does not have that today, and that incures one set
> of overhead. There are plenty more...
AFAIK, users cannot specify type info manually, but types are tracked
when transforming Elisp byte code into the LIMPLE representation.
The only problem (AFAIU) is that GCC JIT cannot reach inside the subr
level, so all this information does not benefit Emacs functions
implemented in C.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:15 ` Ihor Radchenko
@ 2023-08-20 19:24 ` Ihor Radchenko
2023-08-21 2:33 ` Eli Zaretskii
2023-08-28 4:41 ` Emanuel Berg
2023-08-20 20:15 ` Alfred M. Szmidt
2023-08-27 4:01 ` Emanuel Berg
2 siblings, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 19:24 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> The only problem (AFAIU) is that GCC JIT cannot reach inside the subr
> level, so all this information does not benefit Emacs functions
> implemented in C.
If I am right here, it might actually be worth it to rewrite some of the
subroutines in Elisp. For example, the code of rounding_driver (called
by `floor') is full of runtime type checks:
  CHECK_NUMBER (n);
  if (NILP (d))
    ...
  CHECK_NUMBER (d);
  ...
  if (FIXNUMP (d))
    if (XFIXNUM (d) == 0)
      ...
  if (FIXNUMP (n))
    ...
  else if (FLOATP (d))
    if (XFLOAT_DATA (d) == 0)
  int nscale = FLOATP (n) ? double_integer_scale (XFLOAT_DATA (n)) : 0;
  ...
During native compilation, if type information on n and d is available,
GCC might have a chance to cut a number of branches away from the above
code.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:24 ` Ihor Radchenko
@ 2023-08-21 2:33 ` Eli Zaretskii
2023-08-21 4:11 ` Ihor Radchenko
2023-08-28 4:41 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 2:33 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: eliz@gnu.org, incal@dataswamp.org, emacs-devel@gnu.org
> Date: Sun, 20 Aug 2023 19:24:36 +0000
>
> If I am right here, it might actually be worth it to rewrite some of the
> subroutines in Elisp. For example, the code of rounding_driver (called
> by `floor') is full of runtime type checks:
>
>   CHECK_NUMBER (n);
>   if (NILP (d))
>     ...
>   CHECK_NUMBER (d);
>   ...
>   if (FIXNUMP (d))
>     if (XFIXNUM (d) == 0)
>       ...
>   if (FIXNUMP (n))
>     ...
>   else if (FLOATP (d))
>     if (XFLOAT_DATA (d) == 0)
>   int nscale = FLOATP (n) ? double_integer_scale (XFLOAT_DATA (n)) : 0;
>   ...
>
> During native compilation, if type information on n and d is available,
> GCC might have a chance to cut a number of branches away from the above
> code.
Cut them how? AFAICT, none of the tests above are redundant.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 2:33 ` Eli Zaretskii
@ 2023-08-21 4:11 ` Ihor Radchenko
2023-08-21 4:15 ` Po Lu
2023-08-21 10:48 ` Eli Zaretskii
0 siblings, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 4:11 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: ams, incal, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
>> CHECK_NUMBER (n);
>> if (NILP (d))
>> return FLOATP (n) ? double_to_integer (double_round (XFLOAT_DATA (n))) : n;
>> ...
>> During native compilation, if type information and n and d is available,
>> GCC might have a chance to cut a number of branches away from the above
>> code.
>
> Cut them how? AFAICT, none of the tests above are redundant.
Consider the following:
(let ((a 10))
  (setq a (+ a 100))
  (floor a nil))
During compilation of the above code, the compiler will know that a is a
positive integer. Therefore, CHECK_NUMBER, NILP, and FLOATP are not
necessary and can be omitted in the call to `floor':
(let ((a 10))
  (setq a (+ a 100))
  a)
However, GCC JIT has no information about the internal structure of the
`floor' subr. Hence, it is currently unable to perform such an
optimization.
It could, if it were somehow given information about the implementation
of `floor'.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 4:11 ` Ihor Radchenko
@ 2023-08-21 4:15 ` Po Lu
2023-08-21 4:36 ` Ihor Radchenko
2023-08-21 10:48 ` Eli Zaretskii
1 sibling, 1 reply; 247+ messages in thread
From: Po Lu @ 2023-08-21 4:15 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Eli Zaretskii, ams, incal, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> (let ((a 10))
> (setq a (+ a 100))
> (floor a nil))
>
> During compilation of the above code, the compiler will know that a is a
> positive integer. Therefore, CHECK_NUMBER, NILP, and FLOATP are not
> necessary and can be omitted in the call to `floor':
>
> (let ((a 10))
> (setq a (+ a 100))
> a)
>
> However, GCC JIT has no information about the internal structure of the
> `floor' subr. Hence, it is currently unable to perform such an
> optimization.
>
> It could, if it were somehow given information about the implementation
> of `floor'.
This should thus be implemented in the native compiler, without
affecting the code of Ffloor itself.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 4:15 ` Po Lu
@ 2023-08-21 4:36 ` Ihor Radchenko
2023-08-21 4:43 ` Po Lu
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 4:36 UTC (permalink / raw)
To: Po Lu; +Cc: Eli Zaretskii, ams, incal, emacs-devel
Po Lu <luangruo@yahoo.com> writes:
>> However, GCC JIT has no information about the internal structure of the
>> `floor' subr. Hence, it is currently unable to perform such an
>> optimization.
>>
>> It could, if it were somehow given information about the implementation
>> of `floor'.
>
> This should thus be implemented in the native compiler, without
> affecting the code of Ffloor itself.
I do understand that the approach you propose is indeed used, for
example, for `car' in emit_lval_XCAR. However, is it practical for
functions like `floor'?
`car' implementation is very unlikely to change in the future. But
`floor' and other functions (we should not be limited to `floor') may
change their implementations. The extra "native comp" copy of the
implementation will always need to be kept synchronized with the
original implementation. I doubt that it is practical maintenance-wise.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 4:36 ` Ihor Radchenko
@ 2023-08-21 4:43 ` Po Lu
2023-08-21 5:06 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Po Lu @ 2023-08-21 4:43 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Eli Zaretskii, ams, incal, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> I do understand that the approach you propose is indeed used, for
> example, for `car' in emit_lval_XCAR. However, is it practical for
> functions like `floor'?
>
> `car' implementation is very unlikely to change in future.
Why not?
> But `floor' and other functions (we should not be limited to `floor')
> may change their implementations. The extra "native comp" copy of the
> implementation will need to be always synchronized with the original
> implementation. I doubt that it is practical maintenance-wise.
How and why so? How are Fcar and Ffloor divergent in this department?
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 4:43 ` Po Lu
@ 2023-08-21 5:06 ` Ihor Radchenko
2023-08-21 5:25 ` [External] : " Drew Adams
` (3 more replies)
0 siblings, 4 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 5:06 UTC (permalink / raw)
To: Po Lu; +Cc: Eli Zaretskii, ams, incal, emacs-devel
Po Lu <luangruo@yahoo.com> writes:
>> `car' implementation is very unlikely to change in future.
>
> Why not?
Mostly because such basic functions are rarely changed.
Of course, it is not impossible that `car' is changed in future.
>> But `floor' and other functions (we should not be limited to `floor')
>> may change their implementations. The extra "native comp" copy of the
>> implementation will need to be always synchronized with the original
>> implementation. I doubt that it is practical maintenance-wise.
>
> How and why so? How are Fcar and Ffloor divergent in this department?
`floor' might also be rather stable. I was mostly referring to "we
should not be limited to `floor'" - it may be a problem for other
functions.
But let me rephrase it in other terms: what you propose will require
maintaining two separate implementations of subroutines - one in C, and
one specially tailored to GCC JIT pseudocode. This may be doable for a
small set of core primitives, but it is not scalable if we want more
subroutines to benefit from GCC JIT optimizations.
Another idea, if rewriting in Elisp is not feasible, could be to
structure the internal C code in such a way that we can derive GCC
JIT pseudocode right from the C function bodies.
For example, Ffloor could (1) be split into smaller functions dedicated
to certain argument type combinations; (2) record metadata, readable by
the native-comp code, about which small function corresponds to which
argument types. Then, native comp can emit direct calls to these smaller
(and faster) functions when the type is known.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* RE: [External] : Re: Shrinking the C core
2023-08-21 5:06 ` Ihor Radchenko
@ 2023-08-21 5:25 ` Drew Adams
2023-08-21 5:34 ` Po Lu
` (2 subsequent siblings)
3 siblings, 0 replies; 247+ messages in thread
From: Drew Adams @ 2023-08-21 5:25 UTC (permalink / raw)
To: Ihor Radchenko, Po Lu
Cc: Eli Zaretskii, ams@gnu.org, incal@dataswamp.org,
emacs-devel@gnu.org
> Mostly because such basic functions are rarely changed.
> Of course, it is not impossible that `car' is changed in future.
Perhaps it will become "electric"...
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 5:06 ` Ihor Radchenko
2023-08-21 5:25 ` [External] : " Drew Adams
@ 2023-08-21 5:34 ` Po Lu
2023-08-21 9:17 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Ihor Radchenko
2023-08-27 2:04 ` Shrinking the C core Emanuel Berg
2023-08-21 7:59 ` Gregory Heytings
2023-08-27 5:31 ` Emanuel Berg
3 siblings, 2 replies; 247+ messages in thread
From: Po Lu @ 2023-08-21 5:34 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Eli Zaretskii, ams, incal, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> But let me rephrase it in other terms: what you propose will require
> maintaining two separate implementations of subroutines - one in C, and
> one specially tailored to GCC JIT pseudocode. This may be doable for a
> small set of core primitives, but it is not scalable if we want more
> subroutines to benefit from GCC JIT optimizations.
I'm inclined to believe that type checks within those more complex
functions do not contribute so much to the runtime of most
native-compiled functions as the small set of arithmetic primitives do.
> Another idea, if rewriting in Elisp is not feasible, could be to
> structure the internal C code in such a way that we can derive GCC
> JIT pseudocode right from the C function bodies.
>
> For example, Ffloor could (1) be split into smaller functions dedicated
> to certain argument type combinations; (2) record metadata, readable by
> the native-comp code, about which small function corresponds to which
> argument types. Then, native comp can emit direct calls to these smaller
> (and faster) functions when the type is known.
That sounds like over-engineering, especially given that an actual
performance problem (in real text editing tasks) has yet to be ascribed
to Ffloor.
Can we all stop bikeshedding over this now? By this point, the subject
of this thread bears absolutely no relation to the debate within.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 5:34 ` Po Lu
@ 2023-08-21 9:17 ` Ihor Radchenko
2023-08-21 9:42 ` Gregory Heytings
2023-08-21 11:12 ` Add more supported primitives in libgccjit IR Eli Zaretskii
2023-08-27 2:04 ` Shrinking the C core Emanuel Berg
1 sibling, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 9:17 UTC (permalink / raw)
To: Po Lu, Andrea Corallo; +Cc: Eli Zaretskii, ams, incal, emacs-devel
Po Lu <luangruo@yahoo.com> writes:
> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> But let me rephrase it in other terms: what you propose will require
>> maintaining two separate implementations of subroutines - one in C, and
>> one specially tailored to GCC JIT pseudocode. This may be doable for a
>> small set of core primitives, but it is not scalable if we want more
>> subroutines to benefit from GCC JIT optimizations.
>
> I'm inclined to believe that type checks within those more complex
> functions do not contribute so much to the runtime of most
> native-compiled functions as the small set of arithmetic primitives do.
I am pretty sure that it depends on the specific use case.
On average, you might be right, though.
Just to get something going, I ran the
https://elpa.gnu.org/packages/elisp-benchmarks.html benchmarks and
looked into the primitives that take a significant amount of time:
3.85% emacs emacs [.] arith_driver
2.62% emacs emacs [.] Fgtr
2.31% emacs emacs [.] check_number_coerce_marker
2.24% emacs emacs [.] Fmemq
2.20% emacs emacs [.] Flss
1.56% emacs emacs [.] arithcompare
1.12% emacs emacs [.] Faset
1.10% emacs emacs [.] Fcar_safe
0.97% emacs emacs [.] Faref
0.94% emacs emacs [.] Fplus
0.93% emacs emacs [.] float_arith_driver
0.58% emacs emacs [.] Feqlsign
We may consider directly supporting some of these functions in the
native compiler's libgccjit IR to get rid of runtime type checks.
> Can we all stop bikeshedding over this now? By this point, the subject
> of this thread bears absolutely no relation to the debate within.
The thread evolved to general Elisp performance discussion and this
particular branch evolved to compiler discussion. Updating the subject.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 9:17 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Ihor Radchenko
@ 2023-08-21 9:42 ` Gregory Heytings
2023-08-21 10:36 ` Ihor Radchenko
2023-08-21 11:02 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Alfred M. Szmidt
2023-08-21 11:12 ` Add more supported primitives in libgccjit IR Eli Zaretskii
1 sibling, 2 replies; 247+ messages in thread
From: Gregory Heytings @ 2023-08-21 9:42 UTC (permalink / raw)
To: Ihor Radchenko
Cc: Po Lu, Andrea Corallo, Eli Zaretskii, ams, incal, emacs-devel
>
> Just to get something going, I executed
> https://elpa.gnu.org/packages/elisp-benchmarks.html benchmarks and
> looked into the primitives that take significant amount of time:
>
> 3.85% emacs emacs [.] arith_driver
> 2.62% emacs emacs [.] Fgtr
> 2.31% emacs emacs [.] check_number_coerce_marker
> 2.24% emacs emacs [.] Fmemq
> 2.20% emacs emacs [.] Flss
> 1.56% emacs emacs [.] arithcompare
> 1.12% emacs emacs [.] Faset
> 1.10% emacs emacs [.] Fcar_safe
> 0.97% emacs emacs [.] Faref
> 0.94% emacs emacs [.] Fplus
> 0.93% emacs emacs [.] float_arith_driver
> 0.58% emacs emacs [.] Feqlsign
>
> We may consider directly supporting some of these functions in native
> compile libgccjit IR code to get rid of runtime type checks.
>
I'm not sure elisp-benchmarks are representative enough of actual Elisp
code, but this is an excellent example of what Alfred is trying to convey.
Look at data.c:arith_driver. You'll see that it's essentially a function
which dispatches the handling of its arguments depending on their type: if
the arguments are integers, do something, else if the arguments are
floats, do something, else if the arguments are bignums, do something.
Now look at data.c:Fgtr or data.c:Flss. You'll see that they call
arithcompare_driver, which calls arithcompare, which again dispatches the
handling of the arguments depending on their types: integer, float,
bignum.
These integer/float/bignum types are not known at compilation time,
because Elisp is a dynamically typed language, which means that the type
of the value held by a variable can change over its lifetime: it could be
an integer at one point, later a float, and later again a bignum. In a
statically typed language, these type dispatch operations can be bypassed,
because it is known at compilation time that the arguments are, say,
integers, and that we can simply call the "add" instruction to compute
their sum.
So, in a statically typed language, adding two integers takes a single CPU
cycle. In a dynamically typed language, it can take many CPU cycles.
And of course, using a JIT compiler does not magically transform a
dynamically typed language into a statically typed one: you still need to
do these dynamic dispatches.
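To make the point concrete, here is a minimal sketch in C. The tag names
and object layout are invented for illustration and are not the actual
Emacs ones; the point is only the extra branching a dynamically typed add
must pay:

```c
#include <assert.h>

/* Hypothetical tagged value, loosely modeled on a Lisp object. */
enum tag { TAG_FIXNUM, TAG_FLOAT, TAG_BIGNUM };

struct value {
    enum tag tag;
    union { long fixnum; double flonum; /* bignum elided */ } u;
};

/* Dynamically typed add: every call pays two tag checks and a branch
   before any arithmetic happens. */
static struct value dyn_add(struct value a, struct value b)
{
    struct value r;
    if (a.tag == TAG_FIXNUM && b.tag == TAG_FIXNUM) {
        r.tag = TAG_FIXNUM;
        r.u.fixnum = a.u.fixnum + b.u.fixnum;
    } else {
        /* Coerce to float; real code would also handle bignums. */
        double x = (a.tag == TAG_FIXNUM) ? (double) a.u.fixnum : a.u.flonum;
        double y = (b.tag == TAG_FIXNUM) ? (double) b.u.fixnum : b.u.flonum;
        r.tag = TAG_FLOAT;
        r.u.flonum = x + y;
    }
    return r;
}

/* Statically typed add: the compiler emits a single add instruction. */
static long static_add(long a, long b) { return a + b; }
```

The statically typed version needs no tag at all, which is exactly the
work the dispatch in arith_driver and arithcompare cannot avoid.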
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 9:42 ` Gregory Heytings
@ 2023-08-21 10:36 ` Ihor Radchenko
2023-08-21 11:02 ` Alfred M. Szmidt
` (2 more replies)
2023-08-21 11:02 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Alfred M. Szmidt
1 sibling, 3 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 10:36 UTC (permalink / raw)
To: Gregory Heytings
Cc: Po Lu, Andrea Corallo, Eli Zaretskii, ams, incal, emacs-devel
Gregory Heytings <gregory@heytings.org> writes:
> I'm not sure elisp-benchmarks are representative enough of actual Elisp
> code...
Any better ideas?
> Look at data.c:arith_driver. You'll see that it's essentially a function
> which dispatches the handling of its arguments depending on their type...
>
> These integer/float/bignum types are not known at compilation time ...
This is not correct. If you have something like
(progn (setq x 1) (> x 2)), the compiler is actually able to determine
the type of X at compilation time.
That is not to say such inference is always possible, but it is a big
part of what native compilation does for Elisp. I strongly encourage you
to read through https://zenodo.org/record/3736363 - it describes what is
being done during native compilation.
---
Now, about my actual suggestion here:
Even if native compilation determines some variable types at compile
time, it cannot always make use of this information, because it has
knowledge of the byte-compiled Elisp instructions, but not of what is
inside Elisp subr primitives - with a few exceptions described in "Section
3.8 final (code layout)" of the linked paper:
This pass is also responsible for substituting the calls to
selected primitive functions with an equivalent implementation
described in libgccjit IR. This happens for small and frequently used
functions such as: car, cdr, setcar, setcdr, 1+, 1-, or - (negation).
In my previous message, I identified a few more candidates to
implement in this libgccjit IR.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 10:36 ` Ihor Radchenko
@ 2023-08-21 11:02 ` Alfred M. Szmidt
2023-08-21 11:41 ` Ihor Radchenko
2023-08-21 11:05 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Gregory Heytings
2023-08-21 11:34 ` Add more supported primitives in libgccjit IR Manuel Giraud via Emacs development discussions.
2 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 11:02 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: gregory, luangruo, akrl, eliz, incal, emacs-devel
> Look at data.c:arith_driver. You'll see that it's essentially a function
> which dispatches the handling of its arguments depending on their type...
>
> These integer/float/bignum types are not known at compilation time ...
This is not correct. If you have something like
(progn (setq x 1) (> x 2)), compiler is actually able to determine the
type of X at compilation time.
It is absolutely correct: the Emacs compiler is not capable of doing
what you are suggesting. There are no specific functions for fixnum
comparison in Emacs Lisp, nor can the Emacs Lisp compiler be
instructed to do such specific things. I've been repeating this
constantly now: that is what is needed to make programs faster in Lisp.
The reason SBCL is faster is that it allows individuals to
instruct the compiler to do what is best for the program -- e.g., the
Org maintainers can write functions that are more specialized. Native
compilation simply does not solve that!
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 11:02 ` Alfred M. Szmidt
@ 2023-08-21 11:41 ` Ihor Radchenko
2023-08-21 12:20 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 11:41 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: gregory, luangruo, akrl, eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> > Look at data.c:arith_driver. You'll see that it's essentially a function
> > which dispatches the handling of its arguments depending on their type...
> >
> > These integer/float/bignum types are not known at compilation time ...
>
> This is not correct. If you have something like
> (progn (setq x 1) (> x 2)), compiler is actually able to determine the
> type of X at compilation time.
>
> It is absolutley correct, the Emacs compiler is not capable of doing
> what you are suggesting. There are no specific functions for fixnum
> comparison in Emacs Lisp, nor is the Emacs Lisp compiler capable of
> being instructed to do such specific things. I've been repeating this
> constantly now. That is needed to make programs faster in Lisp.
I can see
/*
Define a substitute for Fadd1 Fsub1.
Currently expose just fixnum arithmetic.
*/
static void
define_add1_sub1 (void)
in comp.c
So, there is some type-specific optimization going on.
It looks very limited though.
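To make the mechanism concrete, the shape of what such a substitute
compiles down to can be sketched in C. The real compiler emits libgccjit
IR rather than C, and the object model below is invented for illustration
(the actual Emacs tagging scheme differs):

```c
#include <assert.h>
#include <limits.h>

/* Invented model: fixnums carry a 0 low bit, with the integer value
   shifted left by one.  Not the real Emacs representation. */
typedef long Lisp_Object;

#define FIXNUMP(o)       (((o) & 1) == 0)
#define MAKE_FIXNUM(n)   ((Lisp_Object) ((n) << 1))
#define FIXNUM_VALUE(o)  ((o) >> 1)
#define MOST_POSITIVE_FIXNUM (LONG_MAX >> 1)

/* Stand-in for the generic C primitive, which would also handle
   floats, bignums and markers. */
static Lisp_Object Fadd1_generic(Lisp_Object o)
{
    return MAKE_FIXNUM(FIXNUM_VALUE(o) + 1);
}

/* Roughly the shape of what define_add1_sub1 arranges to inline for
   (1+ x): a tag test and an overflow guard, then one machine add,
   falling back to the generic primitive otherwise. */
static Lisp_Object add1_inline(Lisp_Object o)
{
    if (FIXNUMP(o) && FIXNUM_VALUE(o) != MOST_POSITIVE_FIXNUM)
        return MAKE_FIXNUM(FIXNUM_VALUE(o) + 1);
    return Fadd1_generic(o);
}
```

The fast path costs one tag test and one overflow guard before a plain
machine add; anything else falls back to the unchanged generic primitive.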
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 11:41 ` Ihor Radchenko
@ 2023-08-21 12:20 ` Eli Zaretskii
2023-08-21 14:49 ` Add more supported primitives in libgccjit IR Andrea Corallo
0 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 12:20 UTC (permalink / raw)
To: Ihor Radchenko, Andrea Corallo; +Cc: ams, gregory, luangruo, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: gregory@heytings.org, luangruo@yahoo.com, akrl@sdf.org, eliz@gnu.org,
> incal@dataswamp.org, emacs-devel@gnu.org
> Date: Mon, 21 Aug 2023 11:41:53 +0000
>
> "Alfred M. Szmidt" <ams@gnu.org> writes:
>
> > > Look at data.c:arith_driver. You'll see that it's essentially a function
> > > which dispatches the handling of its arguments depending on their type...
> > >
> > > These integer/float/bignum types are not known at compilation time ...
> >
> > This is not correct. If you have something like
> > (progn (setq x 1) (> x 2)), compiler is actually able to determine the
> > type of X at compilation time.
> >
> > It is absolutley correct, the Emacs compiler is not capable of doing
> > what you are suggesting. There are no specific functions for fixnum
> > comparison in Emacs Lisp, nor is the Emacs Lisp compiler capable of
> > being instructed to do such specific things. I've been repeating this
> > constantly now. That is needed to make programs faster in Lisp.
>
> I can see
>
> /*
> Define a substitute for Fadd1 Fsub1.
> Currently expose just fixnum arithmetic.
> */
>
> static void
> define_add1_sub1 (void)
>
> in comp.c
>
> So, there is some type-specific optimization going on.
> It looks very limited though.
This discussion is almost useless without Andrea on board, and you are
using his stale email address. Please use the one I used here
instead.
And I really suggest that people wait for Andrea to chime in, before
discussing code that he wrote and still maintains very actively.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-21 12:20 ` Eli Zaretskii
@ 2023-08-21 14:49 ` Andrea Corallo
2023-08-23 10:11 ` Ihor Radchenko
2023-08-26 0:47 ` Emanuel Berg
0 siblings, 2 replies; 247+ messages in thread
From: Andrea Corallo @ 2023-08-21 14:49 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: Ihor Radchenko, ams, gregory, luangruo, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
>> From: Ihor Radchenko <yantar92@posteo.net>
>> Cc: gregory@heytings.org, luangruo@yahoo.com, akrl@sdf.org, eliz@gnu.org,
>> incal@dataswamp.org, emacs-devel@gnu.org
>> Date: Mon, 21 Aug 2023 11:41:53 +0000
>>
>> "Alfred M. Szmidt" <ams@gnu.org> writes:
>>
>> > > Look at data.c:arith_driver. You'll see that it's essentially a function
>> > > which dispatches the handling of its arguments depending on their type...
>> > >
>> > > These integer/float/bignum types are not known at compilation time ...
>> >
>> > This is not correct. If you have something like
>> > (progn (setq x 1) (> x 2)), compiler is actually able to determine the
>> > type of X at compilation time.
>> >
>> > It is absolutley correct, the Emacs compiler is not capable of doing
>> > what you are suggesting. There are no specific functions for fixnum
>> > comparison in Emacs Lisp, nor is the Emacs Lisp compiler capable of
>> > being instructed to do such specific things. I've been repeating this
>> > constantly now. That is needed to make programs faster in Lisp.
>>
>> I can see
>>
>> /*
>> Define a substitute for Fadd1 Fsub1.
>> Currently expose just fixnum arithmetic.
>> */
>>
>> static void
>> define_add1_sub1 (void)
>>
>> in comp.c
>>
>> So, there is some type-specific optimization going on.
>> It looks very limited though.
>
> This discussion is almost useless without Andrea on board, and you are
> using hist stale email address. Please use the one I used here
> instead.
>
> And I really suggest that people wait for Andrea to chime in, before
> discussing code that he wrote and still maintains very actively.
Hello Eli & all,
sorry for being late to the party, I'm on holiday :)
Anyway to clarify:
Yes the native compiler does value-type inference already (this is how
the return type of functions is computed as well).
Yes one can already type hint an object to the compiler, even if this is
limited to cons and fixnums (making it generic is on my todo list).
Yes, it would be great IMO to extend this mechanism to function
arguments eventually as well (I might give it a go after summer?).
Yes the backend tries to inline some code when possible (ex
define_add1_sub1).
Yes we could add more of this inlining, the infrastructure is already
there but I personally had no time to work on this :(
Yes, it would be great to work on this benchmark-driven, even if this
opens the classic question of what a representative set of benchmarks is.
My next activity, when my time is not taken up by maintenance and the
other activities of my life, will be focused more on safety and
correctness, I think. I'd love to work 100% on Emacs but I must pay my
bills like everyone :)
If someone is interested in working on some of those points (or other
areas of the native compiler) I'm happy to provide as much help as I
can.
I'm sorry to observe that this conversation was fueled by someone
explaining mechanisms with no understanding of how our system works;
this just makes people lose their time :/
Best Regards
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-21 14:49 ` Add more supported primitives in libgccjit IR Andrea Corallo
@ 2023-08-23 10:11 ` Ihor Radchenko
2023-08-25 9:19 ` Andrea Corallo
2023-08-26 0:47 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-23 10:11 UTC (permalink / raw)
To: Andrea Corallo; +Cc: Eli Zaretskii, ams, gregory, luangruo, emacs-devel
Andrea Corallo <acorallo@gnu.org> writes:
> Yes the native compiler does value-type inference already (this is how
> the return type of functions is computed as well).
Thanks for the confirmation!
Do I understand correctly that value-type inference is still extremely
limited? I am confused about native compilation results for
(defun test1 ()
(let ((x (list 'a 'b 'c)))
(when (listp x) "Return value")))
(see <https://yhetil.org/emacs-devel/87pm3gfxgi.fsf@localhost/>)
> Yes the backend tries to inline some code when possible (ex
> define_add1_sub1).
>
> Yes we could add more of this inlining, the infrastructure is already
> there but I personally had no time to work on this :(
Do you have any comment on the problem with having multiple parallel
implementations of the same subroutine?
> If someone is interested on working on some of those points (or other
> areas of the native compiler) I'm happy to provided help as much as I
> can.
Is there any detailed information about the format of native compile
debug output?
I tried
(defun test1 ()
(let ((x (list 'a 'b 'c)))
(when (listp x) "Return value")))
(setq native-comp-debug 3)
(setq native-comp-verbose 3)
(native-compile #'test1 "/tmp/test1.eln")
but it is not very clear what exactly is going on there.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-23 10:11 ` Ihor Radchenko
@ 2023-08-25 9:19 ` Andrea Corallo
2023-08-25 11:06 ` Ihor Radchenko
2023-08-27 1:40 ` Emanuel Berg
0 siblings, 2 replies; 247+ messages in thread
From: Andrea Corallo @ 2023-08-25 9:19 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Eli Zaretskii, ams, gregory, luangruo, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> Andrea Corallo <acorallo@gnu.org> writes:
>
>> Yes the native compiler does value-type inference already (this is how
>> the return type of functions is computed as well).
>
> Thanks for the confirmation!
>
> Do I understand correctly that value-type inference is still extremely
> limited?
Why?
> I am confused about native compilation results for
>
> (defun test1 ()
> (let ((x (list 'a 'b 'c)))
> (when (listp x) "Return value")))
>
> (see <https://yhetil.org/emacs-devel/87pm3gfxgi.fsf@localhost/>)
Yes, the native compiler is failing to optimize that; one reason is
probably that `list' is not a pure function. This works better, for
example, with:
(defun test2 ()
(let ((x '(a b c)))
(when (listp x) "Return value")))
But anyway, it should work; the trouble is that we call listp on
something we know is a cons
(set #(mvar 12095070 1 boolean) (call listp #(mvar 12094834 1 cons)))
But the result is just a boolean instead of being t.
If we could have a bug report for this I can work on it as soon as I get
time.
>> Yes the backend tries to inline some code when possible (ex
>> define_add1_sub1).
>>
>> Yes we could add more of this inlining, the infrastructure is already
>> there but I personally had no time to work on this :(
>
> Do you have any comment on the problem with having multiple parallel
> implementations of the same subroutine?
It's not nice, but if justified by performance for a few core functions
I think it is acceptable.
>> If someone is interested on working on some of those points (or other
>> areas of the native compiler) I'm happy to provided help as much as I
>> can.
>
> Is there any detailed information about the format of native compile
> debug output?
Not so far, sorry; that's an internal dump format. Do you have any
specific question?
> I tried
>
> (defun test1 ()
> (let ((x (list 'a 'b 'c)))
> (when (listp x) "Return value")))
> (setq native-comp-debug 3)
> (setq native-comp-verbose 3)
> (native-compile #'test1 "/tmp/test1.eln")
>
> but it is not very clear what exactly is going on there.
The compiler performs a series of transformations on the code; those are
called "passes". In the *Native-compile-Log* buffer you can see the dump
of the code for each function being compiled, in the current intermediate
representation. You'll see that the first intermediate representation
is LAP; most of the following passes are dumped in LIMPLE.
Best Regards
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-25 9:19 ` Andrea Corallo
@ 2023-08-25 11:06 ` Ihor Radchenko
2023-08-25 14:26 ` Andrea Corallo
2023-08-27 1:40 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-25 11:06 UTC (permalink / raw)
To: Andrea Corallo; +Cc: Eli Zaretskii, ams, gregory, luangruo, emacs-devel
Andrea Corallo <acorallo@gnu.org> writes:
>> Do I understand correctly that value-type inference is still extremely
>> limited?
>
> Why?
Because when I tried to check if there is type optimization, I ran into
that `list' + `listp' call that was not optimized.
Are there other known instances of such missing inference?
> If we could have a bug report for this I can work on it as soon as I get
> time.
Done. https://debbugs.gnu.org/cgi/bugreport.cgi?bug=65527
>> Do you have any comment on the problem with having multiple parallel
>> implementations of the same subroutine?
>
> It's not nice but if justified by performance for few core functions I
> think is acceptable.
I was just wondering whether we could have these native
compilation-specific implementations done in Elisp instead of C. AFAIU,
they would then be inlined as needed, just as a part of normal nativecomp
optimizations. But the main question is whether it would be possible to
retain C performance in the generic case, when argument values cannot be
inferred ahead of time.
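To illustrate what retaining C performance means here, a sketch in C of
the usual guarded fast-path pattern (invented tagging scheme, not the
real Emacs one): the generic case pays only one predictable branch before
entering the existing C primitive, so it keeps essentially C speed.

```c
#include <assert.h>

/* Invented model: even words are fixnums (value << 1); odd words are
   "something else" that the generic C primitive must decode. */
typedef long Lisp_Object;
#define FIXNUMP(o)      (((o) & 1) == 0)
#define MAKE_FIXNUM(n)  ((Lisp_Object) ((n) << 1))
#define XFIXNUM(o)      ((o) >> 1)

/* Existing generic primitive, the moral equivalent of Fplus: full
   dispatch over fixnums, floats, bignums, markers (elided here). */
static Lisp_Object plus_generic(Lisp_Object a, Lisp_Object b)
{
    return MAKE_FIXNUM(XFIXNUM(a) + XFIXNUM(b));
}

/* What natively compiled (+ a b) could compile to: when both guards
   pass, we never enter C dispatch; when they fail, we pay one branch
   and then run the unchanged C primitive. */
static Lisp_Object plus_compiled(Lisp_Object a, Lisp_Object b)
{
    if (FIXNUMP(a) && FIXNUMP(b))
        return a + b;   /* both tags are 0, so the raw add is exact */
    return plus_generic(a, b);
}
```

An Elisp replacement would have to beat that single-branch fallback in
the generic case, which is the hard part of the question above.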
>> Is there any detailed information about the format of native compile
>> debug output?
>
> Not so far sorry, that's an internal dump format, do you have any
> specific question?
> ...
> The compiler performs a series of transformations on the code, those are
> called "passes". In the *Native-compile-Log* you can see the dump of
> the code for each function being compiled in the current intermendiate
> rapresentation. You'll see that the first intermediate rapresentation
> is LAP, most of the following passes are dumped in LIMPLE.
I have no questions about the passes - they are described in your paper.
Though it would be nice to put a reference to it in the log buffer or the
manual, or even to ship the paper together with the Emacs sources.
However, the internal dump format prevents more detailed understanding.
For example, there is no easy way for other people to figure out what
goes wrong during the optimization passes without knowing the dump
format. Having an example of annotated debug output would be helpful to
make things clearer.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-25 11:06 ` Ihor Radchenko
@ 2023-08-25 14:26 ` Andrea Corallo
2023-08-26 11:14 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Andrea Corallo @ 2023-08-25 14:26 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Eli Zaretskii, ams, gregory, luangruo, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> Andrea Corallo <acorallo@gnu.org> writes:
>
>>> Do I understand correctly that value-type inference is still extremely
>>> limited?
>>
>> Why?
>
> Because when I tried to check if there is type optimization, I ran into
> that `lisp' + `listp' call that was not optimized.
>
> Are there other known instances of such missing inference?
This field is largely unexplored; probably when people start paying
more attention to the inferred return types of Lisp functions, we will
get more bug reports for missed opportunities.
>> If we could have a bug report for this I can work on it as soon as I get
>> time.
>
> Done. https://debbugs.gnu.org/cgi/bugreport.cgi?bug=65527
Thanks
>>> Do you have any comment on the problem with having multiple parallel
>>> implementations of the same subroutine?
>>
>> It's not nice but if justified by performance for few core functions I
>> think is acceptable.
>
> I just thought if we could have this native compilation-specific
> implementations done in Elisp instead of C. AFAIU, it would then be
> inlined as needed just as a part of normal nativecomp optimizations. But
> the main question if it could be possible to retain C performance in the
> generic case when argument values cannot be inferred ahead of time.
No, that is not reasonable: CMUCL code, like SBCL code, when not
micro-optimized, is rather slow compared to C; still, native compilation
brings a good performance boost to the execution engine. The fact is
that, often, the execution engine is not the performance bottleneck in
our application; the usual suspects are runtime functions and, of course,
the GC.
>>> Is there any detailed information about the format of native compile
>>> debug output?
>>
>> Not so far sorry, that's an internal dump format, do you have any
>> specific question?
>> ...
>> The compiler performs a series of transformations on the code, those are
>> called "passes". In the *Native-compile-Log* you can see the dump of
>> the code for each function being compiled in the current intermendiate
>> rapresentation. You'll see that the first intermediate rapresentation
>> is LAP, most of the following passes are dumped in LIMPLE.
>
> I have no questions about passes - they are described in your paper.
> Though it would be nice to put a reference to it in log buffer, manual,
> or even share the paper together with Emacs sources.
>
> However, the internal dump format prevents more detailed understanding.
> For example, there is no easy way for other people to figure out what
> goes wrong during the optimization passes without knowing the dump
> format. Having an example annotated debug output would be helpful to
> make things more clear.
Well, if it helps, the most important LIMPLE operators are AFAIR
documented in the paper you refer to.
I don't think I have time now to write more documentation on this, but it
should be pretty straightforward to compare the output of the last LIMPLE
pass with what we emit as libgccjit IR to understand its meaning and
start digging into the subject.
Best Regards
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-25 14:26 ` Andrea Corallo
@ 2023-08-26 11:14 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-26 11:14 UTC (permalink / raw)
To: Andrea Corallo; +Cc: Eli Zaretskii, ams, gregory, luangruo, emacs-devel
Andrea Corallo <acorallo@gnu.org> writes:
>> Are there other known instances of such missing inference?
>
> This field is largely unexplored, probably when people will start paying
> more attention to the inferred return type of lisp functions we will get
> more bug reports for missed opportunities.
The question is how to discover such missed opportunities.
I ran into the realization that list -> listp is not optimized only by
chance.
I am wondering whether it could be worth implementing a pretty-printer
for the LIMPLE that would present the optimized code in a form more
easily understood by Elisp coders.
>> However, the internal dump format prevents more detailed understanding.
>> For example, there is no easy way for other people to figure out what
>> goes wrong during the optimization passes without knowing the dump
>> format. Having an example annotated debug output would be helpful to
>> make things more clear.
>
> Well if it helps the most important LIMPLE operators are AFAIR
> documented in the paper you refer to.
>
> I don't think I've now time to write more doc on this, but it should
> pretty straight forward to compare the the output of the last LIMPLE
> with what we emit as libgccjitIR to understand what's the meaning to
> start digging into the subject.
Thanks! I will read again, more carefully.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-25 9:19 ` Andrea Corallo
2023-08-25 11:06 ` Ihor Radchenko
@ 2023-08-27 1:40 ` Emanuel Berg
2023-08-27 7:38 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 1:40 UTC (permalink / raw)
To: emacs-devel
Andrea Corallo wrote:
> Yes the native compiler is failing to optimize that, one
> reason is probably that list is not a pure function.
> This works better for example with:
>
> (defun test2 ()
> (let ((x '(a b c)))
> (when (listp x) "Return value")))
Can we also see and play with type-value inference, to see how
that works in action?
And type hints for that matter?
The only trace I have been able to find of either is in functions
written in C - built-in functions, in Emacs lingo - for example `+'. If
you do `describe-function' on that, you see in the docstring
Type: (function (&rest (or marker number)) number)
only that _isn't_ in the docstring - so maybe it is an inferred
function type then?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-27 1:40 ` Emanuel Berg
@ 2023-08-27 7:38 ` Emanuel Berg
2023-08-27 13:42 ` Andrea Corallo
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 7:38 UTC (permalink / raw)
To: emacs-devel
> The only trace I have been able to find of either are
> functions in C, built-in functions in the Emacs lingo, for
> example `+', if you do `describe-function' on that you see
> in the docstring
>
> Type: (function (&rest (or marker number)) number)
Lisp functions also get their types inferred, sometimes,
I see now, with the same method (the help).
Maybe functions that are made up of functions that have their
types inferred also get their types inferred ...
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-27 7:38 ` Emanuel Berg
@ 2023-08-27 13:42 ` Andrea Corallo
2023-08-27 22:19 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Andrea Corallo @ 2023-08-27 13:42 UTC (permalink / raw)
To: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> The only trace I have been able to find of either are
>> functions in C, built-in functions in the Emacs lingo, for
>> example `+', if you do `describe-function' on that you see
>> in the docstring
>>
>> Type: (function (&rest (or marker number)) number)
>
> Lisp functions also get their types inferred, sometimes,
AFAIK all native compiled Lisp functions are type inferred.
> Maybe function that are made up of functions that have their
> types inferred also get their types inferred ...
Of course they are typed as well, but we can use the types of the called
functions for propagations as they can be redefined in every moment.
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-27 13:42 ` Andrea Corallo
@ 2023-08-27 22:19 ` Emanuel Berg
2023-08-28 5:04 ` Andrea Corallo
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 22:19 UTC (permalink / raw)
To: emacs-devel
Andrea Corallo wrote:
>>> The only trace I have been able to find of either are
>>> functions in C, built-in functions in the Emacs lingo, for
>>> example `+', if you do `describe-function' on that you see
>>> in the docstring
>>>
>>> Type: (function (&rest (or marker number)) number)
>>
>> Lisp functions also get their types inferred, sometimes,
>
> AFAIK all native compiled Lisp functions are type inferred.
You are right, those that don't have types are the ones that are
only byte-compiled! But then why are they not native-compiled
as well? I think the functions inside lexical let-closures, i.e.
(let (( ... ))
(defun ... ) )
are not natively compiled.
See the file that I yank last, none of those functions are or
get natively-compiled.
Note the `declare-function' lines; those are there so the
byte-compiler will not complain these functions are not
defined - which they are - so maybe the problem starts already
at the byte-compilation step.
>> Maybe function that are made up of functions that have
>> their types inferred also get their types inferred ...
>
> Of course they are typed as well, but we can use the types
> of the called functions for propagations as they can be
> redefined in every moment.
We can?
Please explain, let's say we have this
f(x) = a(b(c(x)))
a, b, and c are natively-compiled and type inferred, so the
type of f is known this way. But if a, b, and c are redefined
and their types are changed to the point f also has a type
change, how can we know that?
If type inference happens at native-compile time, how do we
know when executing f that, because a, b, and c has changed,
the old type of f no longer holds?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-27 22:19 ` Emanuel Berg
@ 2023-08-28 5:04 ` Andrea Corallo
2023-08-28 19:49 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Andrea Corallo @ 2023-08-28 5:04 UTC (permalink / raw)
To: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> Andrea Corallo wrote:
>
>>>> The only trace I have been able to find of either are
>>>> functions in C, built-in functions in the Emacs lingo, for
>>>> example `+', if you do `describe-function' on that you see
>>>> in the docstring
>>>>
>>>> Type: (function (&rest (or marker number)) number)
>>>
>>> Lisp functions also get their types inferred, sometimes,
>>
>> AFAIK all native compiled Lisp functions are type inferred.
>
> You are right, those who don't have types are those that are
> only byte-compiled! But then why are they not native-compiled
> as well? I think the functions inside lexical let-closures, i.e.
>
> (let (( ... ))
> (defun ... ) )
>
> are not natively compiled.
>
> See the file that I yank last, none of those functions are or
> get natively-compiled.
I can't find your code now, but as of today closures are the last bit we
don't compile.
>>> Maybe functions that are made up of functions that have
>>> their types inferred also get their types inferred ...
>>
>> Of course they are typed as well, but we can use the types
>> of the called functions for propagations as they can be
>> redefined in every moment.
>
> We can?
Apologies for the typo, wanted to write _can't_.
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-28 5:04 ` Andrea Corallo
@ 2023-08-28 19:49 ` Emanuel Berg
0 siblings, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-28 19:49 UTC (permalink / raw)
To: emacs-devel
Andrea Corallo wrote:
>> You are right, those who don't have types are those that
>> are only byte-compiled! But then why are they not
>> native-compiled as well? I think the functions inside
>> lexical let-closures, i.e.
>>
>> (let (( ... ))
>> (defun ... ) )
>>
>> are not natively compiled.
>>
>> See the file that I yank last, none of those functions are
>> or get natively-compiled.
>
> I can't find your code now, but as of today closures are the
> last bit we don't compile.
I forgot to post it, but it doesn't matter; let-closures are
not native-compiled, gotcha.
>>>> Maybe functions that are made up of functions that have
>>>> their types inferred also get their types inferred ...
>>>
>>> Of course they are typed as well, but we can use the types
>>> of the called functions for propagations as they can be
>>> redefined in every moment.
>>
>> We can?
>
> Apologies for the typo, wanted to write _can't_.
Are they used for propagations at native-compile time?
Because how else is the function type inferred?
And is it true that the type inferred for a particular
function is only guaranteed to hold at native-compile time, if
that is when it happens?
Because if we have f(x) = a(b(c(x))) and a, b, and c can be
redefined at run time, how can we know the type of f at
run time?
Without again doing a native-compile of c, b, a, and f, that is?
Are there functions to do inference so I can try them on
random functions?
BTW I like it how the type is expressed in Lisp B)
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-21 14:49 ` Add more supported primitives in libgccjit IR Andrea Corallo
2023-08-23 10:11 ` Ihor Radchenko
@ 2023-08-26 0:47 ` Emanuel Berg
2023-08-26 8:26 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-26 0:47 UTC (permalink / raw)
To: emacs-devel
Andrea Corallo wrote:
> Anyway to clarify:
>
> Yes the native compiler does value-type inference already
> (this is how the return type of functions is computed as
> well).
>
> Yes one can already type hint an object to the compiler,
> even if this is limited to cons and fixnums (making it
> generic is on my todo list).
>
> Yes would be great IMO to extend this mechanism to function
> arguments eventually as well (I might give it a go after
> summer?).
>
> Yes the backend tries to inline some code when possible (ex
> define_add1_sub1).
>
> Yes we could add more of this inlining, the infrastructure
> is already there but I personally had no time to work on
> this :(
>
> Yes would be great to work on this benchmark driven, even if
> this opens the classic question of what is a set of
> representative benchmarks.
>
> My next activity when my time is not used by maintenance and
> other activities of my life will be focused more on
> safety and correctness, I think. I'd love to work 100% on
> Emacs but I must pay my bills like everyone :)
Thanks for this summary!
So we have - everything?
We have types that are implicit (inferred) and explicit (type
hints); we have a dynamically typed language, yet type checks
at compile time should be possible, with subsequent pruning
(dropping certain run-time typechecks since the types are
already known); we can do optimizations ourselves in certain
identified areas by providing several functions that do the
same thing but for different types; and we can also do it in
general, at a lower level, when transforming bytecode into
natively compiled machine instructions?
The only thing we don't have is money so we could hire Andrea
Corallo to work full time on really getting even more speed
out of all of these potential areas, where the ground work and
basic infrastructure is already there, just not all the
wonderful things one could build on top of them?
Question one, doesn't this blur the distinction between
statically typed languages and dynamically typed languages?
Because isn't the result a mix of the two?
Question two: SBCL compiles directly into native machine
instructions, while Elisp, with the byte compiler, compiles
into bytecode. What does the native compiler do with that
bytecode? Does it optimize it for the native architecture?
Is this also a blur between the bytecode and "immediately to
machine instructions" modes? If so, how far is it from
the latter? And how close can it get, if work is done on the
byte compiler _and_ the native compiler?
Because to me it sounds like we can have the best of both
worlds: we can have complete portability with Emacs and Elisp,
faster portability with the byte compiler, and really fast
portability (code in execution) with the native compiler?
PS. As for money, as you are aware, there is a patron sponsor
scheme with people donating money. Maybe the FSF or
someone can say, hey, wanna be a patron? If so, we have
3 or so current projects we encourage you to support, one
is Mr Corallo's work on native compilation ...
PPS. If types can be value-inferred, at what point does this
happen? The byte compiler step? If so, how come it never
says, "hey, this is gonna be a type error". Maybe that
step hasn't been taken. It would be a cool feature IMO,
if realized.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-26 0:47 ` Emanuel Berg
@ 2023-08-26 8:26 ` Ihor Radchenko
2023-08-26 17:52 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-26 8:26 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> The only thing we don't have is money so we could hire Andrea
> Corallo to work full time on really getting even more speed
> out of all of these potential areas, where the ground work and
> basic infrastructure is already there, just not all the
> wonderful things one could build on top of them?
It is ultimately a question to Andrea, but I can say that Emacs users
are willing to provide some compensation - we, in Org mode, and at least
Magit are getting a non-insignificant amount of donations after we asked
for them. That's probably not enough to work full time, but certainly
can justify some time spent working on Emacs.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-26 8:26 ` Ihor Radchenko
@ 2023-08-26 17:52 ` Emanuel Berg
0 siblings, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-26 17:52 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> The only thing we don't have is money so we could hire
>> Andrea Corallo to work full time on really getting even
>> more speed out of all of these potential areas, where the
>> ground work and basic infrastructure is already there, just
>> not all the wonderful things one could build on top
>> of them?
>
> It is ultimately a question to Andrea, but I can say that
> Emacs users are willing to provide some compensation - we,
> in Org mode, and at least Magit are getting
> a non-insignificant amount of donations after we asked for
> them. That's probably not enough to work full time, but
> certainly can justify some time spent working on Emacs.
Sounds good!
So let's ask for it on behalf of Andrea then.
And yes, if it is money it has to be "a non-insignificant
amount", as you say, since otherwise it cannot really change
the daily routine and situation, yet the person will still
feel obliged to contribute more.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 10:36 ` Ihor Radchenko
2023-08-21 11:02 ` Alfred M. Szmidt
@ 2023-08-21 11:05 ` Gregory Heytings
2023-08-21 11:46 ` Ihor Radchenko
2023-08-21 11:34 ` Add more supported primitives in libgccjit IR Manuel Giraud via Emacs development discussions.
2 siblings, 1 reply; 247+ messages in thread
From: Gregory Heytings @ 2023-08-21 11:05 UTC (permalink / raw)
To: Ihor Radchenko
Cc: Po Lu, Andrea Corallo, Eli Zaretskii, ams, incal, emacs-devel
>> I'm not sure elisp-benchmarks are representative enough of actual Elisp
>> code...
>
> Any better ideas?
>
make check, for example.
>> Look at data.c:arith_driver. You'll see that it's essentially a
>> function which dispatches the handling of its arguments depending on
>> their type...
>>
>> These integer/float/bignum types are not known at compilation time ...
>
> This is not correct. If you have something like (progn (setq x 1) (> x
> 2)), the compiler is actually able to determine the type of X at compilation
> time.
>
For such a trivial example, yes, and probably for slightly more complex
examples too. But in general, no. Take this equally trivial example:
(setq a most-positive-fixnum)
(setq a (1+ a))
The 'a' object was a fixnum, and became a bignum.
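This promotion can be observed directly from Lisp (in Emacs 27 or later, where `fixnump' and `bignump' exist):

```elisp
(fixnump most-positive-fixnum)        ; t
(fixnump (1+ most-positive-fixnum))   ; nil, promoted to a bignum
(bignump (1+ most-positive-fixnum))   ; t
```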
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 11:05 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Gregory Heytings
@ 2023-08-21 11:46 ` Ihor Radchenko
2023-08-21 12:33 ` Gregory Heytings
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 11:46 UTC (permalink / raw)
To: Gregory Heytings
Cc: Po Lu, Andrea Corallo, Eli Zaretskii, ams, incal, emacs-devel
Gregory Heytings <gregory@heytings.org> writes:
> For such a trivial example, yes, and probably for slightly more complex
> examples too. But in general, no. Take this equally trivial example:
>
> (setq a most-positive-fixnum)
> (setq a (1+ a))
>
> The 'a' object was a fixnum, and became a bignum.
I understand. I doubt that the Emacs native compiler checks this far.
However, such checks are definitely doable - integer bounds are not
unknown and can certainly be handled.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 11:46 ` Ihor Radchenko
@ 2023-08-21 12:33 ` Gregory Heytings
0 siblings, 0 replies; 247+ messages in thread
From: Gregory Heytings @ 2023-08-21 12:33 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Po Lu, Andrea Corallo, Eli Zaretskii, ams, emacs-devel
>> Take this equally trivial example:
>>
>> (setq a most-positive-fixnum)
>> (setq a (1+ a))
>>
>> The 'a' object was a fixnum, and became a bignum.
>
> I understand. I doubt that the Emacs native compiler checks this far.
> However, such checks are definitely doable - integer bounds are not
> unknown and can certainly be handled.
>
They cannot, certainly not in general. In the trivial example above the
compiler could know that 'a' is most-positive-fixnum. But if you have a
'1+' somewhere in an actual piece of code, you cannot, except in rare
cases, know at compile time its argument (whose value may, for example,
depend on the value of a function argument) is an integer and that adding
1 to that integer will not overflow.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-21 10:36 ` Ihor Radchenko
2023-08-21 11:02 ` Alfred M. Szmidt
2023-08-21 11:05 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Gregory Heytings
@ 2023-08-21 11:34 ` Manuel Giraud via Emacs development discussions.
2 siblings, 0 replies; 247+ messages in thread
From: Manuel Giraud via Emacs development discussions. @ 2023-08-21 11:34 UTC (permalink / raw)
To: Ihor Radchenko
Cc: Gregory Heytings, Po Lu, Andrea Corallo, Eli Zaretskii, ams,
incal, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> Gregory Heytings <gregory@heytings.org> writes:
>
>> I'm not sure elisp-benchmarks are representative enough of actual Elisp
>> code...
>
> Any better ideas?
>
>> Look at data.c:arith_driver. You'll see that it's essentially a function
>> which dispatches the handling of its arguments depending on their type...
>>
>> These integer/float/bignum types are not known at compilation time ...
>
> This is not correct. If you have something like
> (progn (setq x 1) (> x 2)), the compiler is actually able to determine the
> type of X at compilation time.
Yes this is called type inference. There is a full literature on the
subject. Even a whole family of languages based on this concept.
AFAIK, the SBCL compiler does some type inference but, as others have
said, I don't think there is such a thing for Emacs Lisp.
--
Manuel Giraud
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR (was: Shrinking the C core)
2023-08-21 9:42 ` Gregory Heytings
2023-08-21 10:36 ` Ihor Radchenko
@ 2023-08-21 11:02 ` Alfred M. Szmidt
1 sibling, 0 replies; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 11:02 UTC (permalink / raw)
To: Gregory Heytings; +Cc: yantar92, luangruo, akrl, eliz, incal, emacs-devel
So, in a statically typed language, adding two integers takes a
single CPU cycle. In a dynamically typed language, it can take
many CPU cycles. And of course, using a JIT compiler does not
magically transform a dynamically typed language into a statically
typed one: you still need to do these dynamic dispatches.
To add to that, in SBCL, these checks can be inlined, and optimized
out (since SBCL has more information about what it has to do with --
the compiler has access to literally everything it produces and runs,
which is not the case in Emacs). So something like FLOOR can be
reduced to just doing what is needed (is it of the promised type? If
not error, otherwise call the specialized code for doing FLOOR), so
the code path becomes much smaller.
Like with the arith_driver example, what one would maybe like to do is
jump directly to the case one needs instead of going through multiple
checks. And who knows how much or little that would matter. It is
just one of bazillion differences between SBCL and Emacs.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-21 9:17 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Ihor Radchenko
2023-08-21 9:42 ` Gregory Heytings
@ 2023-08-21 11:12 ` Eli Zaretskii
2023-08-21 11:53 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 11:12 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: luangruo, akrl, ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Eli Zaretskii <eliz@gnu.org>, ams@gnu.org, incal@dataswamp.org,
> emacs-devel@gnu.org
> Date: Mon, 21 Aug 2023 09:17:15 +0000
>
> Po Lu <luangruo@yahoo.com> writes:
>
> > Ihor Radchenko <yantar92@posteo.net> writes:
> >
> >> But let me rephrase it in other terms: what you propose will require
> >> maintaining two separate implementations of subroutines - one in C, and
> >> one specially tailored to GCC JIT pseudocode. This may be doable for a
> >> small set of core primitives, but not scalable if we want to make more
> >> subroutines benefit from GCC JIT optimizations.
> >
> > I'm inclined to believe that type checks within those more complex
> > functions do not contribute so much to the runtime of most
> > native-compiled functions as the small set of arithmetic primitives do.
>
> I am pretty sure that it depends on the specific use case.
> On average, you might be right though.
>
> Just to get something going, I executed
> https://elpa.gnu.org/packages/elisp-benchmarks.html benchmarks and
> looked into the primitives that take significant amount of time:
>
> 3.85% emacs emacs [.] arith_driver
> 2.62% emacs emacs [.] Fgtr
> 2.31% emacs emacs [.] check_number_coerce_marker
> 2.24% emacs emacs [.] Fmemq
> 2.20% emacs emacs [.] Flss
> 1.56% emacs emacs [.] arithcompare
> 1.12% emacs emacs [.] Faset
> 1.10% emacs emacs [.] Fcar_safe
> 0.97% emacs emacs [.] Faref
> 0.94% emacs emacs [.] Fplus
> 0.93% emacs emacs [.] float_arith_driver
> 0.58% emacs emacs [.] Feqlsign
>
> We may consider directly supporting some of these functions in native
> compile libgccjit IR code to get rid of runtime type checks.
Didn't you just explain, above, how this would create separate
versions of the same code that work differently, and how that should
be avoided?
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Add more supported primitives in libgccjit IR
2023-08-21 11:12 ` Add more supported primitives in libgccjit IR Eli Zaretskii
@ 2023-08-21 11:53 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 11:53 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: luangruo, akrl, ams, incal, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
>> We may consider directly supporting some of these functions in native
>> compile libgccjit IR code to get rid of runtime type checks.
>
> Didn't you just explained, above, how this would create separate
> versions of the same code that work differently, and how that should
> be avoided?
An alternative could be moving the type checking out to Elisp.
Then, the compiler will be able to cut off some branches when type
information is available at compile time.
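As a hedged sketch of the idea (the helper names below are hypothetical, not the actual Emacs implementation): if the dispatch lived at the Lisp level, a compiler that knows the argument type could prune the dead branches:

```elisp
;; Hypothetical Lisp-level dispatcher; my-floor-fixnum and
;; my-floor-float stand in for specialized subroutines.
(defun my-floor (n)
  (cond ((fixnump n) (my-floor-fixnum n))
        ((floatp n)  (my-floor-float n))
        (t (signal 'wrong-type-argument (list 'numberp n)))))

;; In (let ((x 5)) (my-floor x)) the compiler can infer that x
;; is a fixnum, so the floatp branch and the type-error branch
;; are provably dead and could be dropped.
```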
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 5:34 ` Po Lu
2023-08-21 9:17 ` Add more supported primitives in libgccjit IR (was: Shrinking the C core) Ihor Radchenko
@ 2023-08-27 2:04 ` Emanuel Berg
1 sibling, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 2:04 UTC (permalink / raw)
To: emacs-devel
Po Lu wrote:
> I'm inclined to believe that type checks within those more
> complex functions do not contribute so much to the runtime
> of most native-compiled functions as the small set of
> arithmetic primitives do.
Very much so; one should focus on the small ones, and that way
the big ones will be faster as well.
The set of arithmetic primitives sounds like a good idea to
cover first.
But apart from them a primitive function is a function written
in C, but callable from Lisp, AKA what the help calls
a built-in function. So there are quite a lot of those!
One can test for primitive functions like this:
(subrp (symbol-function #'+)) ; t
(info "(elisp) Primitive Function Type")
https://www.gnu.org/software/emacs/manual/html_node/elisp/Primitive-Function-Type.html
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 5:06 ` Ihor Radchenko
2023-08-21 5:25 ` [External] : " Drew Adams
2023-08-21 5:34 ` Po Lu
@ 2023-08-21 7:59 ` Gregory Heytings
2023-08-27 5:31 ` Emanuel Berg
3 siblings, 0 replies; 247+ messages in thread
From: Gregory Heytings @ 2023-08-21 7:59 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Po Lu, Eli Zaretskii, ams, emacs-devel, incal
>
> But let me rephrase it in other terms: what you propose will require
> maintaining two separate implementations of subroutines - one in C, and
> one specially tailored to GCC JIT pseudocode.
>
Three, in fact. 'car' is defined:
- in data.c: DEFUN ("car", Fcar, ...
- in bytecode.c: CASE (Bcar): ...
- in comp.c: static gcc_jit_rvalue * emit_XCAR ... and static gcc_jit_lvalue * emit_lval_XCAR ...
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 5:06 ` Ihor Radchenko
` (2 preceding siblings ...)
2023-08-21 7:59 ` Gregory Heytings
@ 2023-08-27 5:31 ` Emanuel Berg
2023-08-27 6:16 ` Emanuel Berg
3 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 5:31 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
> For example, Ffloor could be (1) split into smaller
> functions dedicated to certain argument type combinations;
> (2) record a metadata readable by native comp code about
> which small function correspond to different argument types.
> Then, native comp can emit direct calls to these smaller
> (and faster) functions when the type is known.
Yes, value-type inference in Elisp and then several functions
in C - based on type - to do the same thing.
Sounds like a good strategy, assuming type checks in C are
actually what makes Elisp slow.
I have now native compiled my Elisp with `native-comp-speed'
set to 3. It is about 100 files, but they require a lot of
other files so all in all located in eln-cache after this step
were just below 500 files. (BTW, what are the .tmp files in
said directory?)
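For reference, a setup along these lines can be used to request that kind of compilation; the directory path is illustrative, and `native-compile-async' requires an Emacs built with native compilation:

```elisp
;; Ask for maximum optimization before compiling.
(setq native-comp-speed 3)

;; Compile a directory of Elisp asynchronously; the resulting
;; .eln files land in the eln-cache directory.
(native-compile-async "~/.emacs.d/lisp/" 'recursively)
```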
I don't know about you guys but the intuition regarding
interactive feel is that it is _very_ fast!
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 4:11 ` Ihor Radchenko
2023-08-21 4:15 ` Po Lu
@ 2023-08-21 10:48 ` Eli Zaretskii
2023-08-21 11:56 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 10:48 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: ams@gnu.org, incal@dataswamp.org, emacs-devel@gnu.org
> Date: Mon, 21 Aug 2023 04:11:47 +0000
>
> > Cut them how? AFAICT, none of the tests above are redundant.
>
> Consider the following:
>
> (let ((a 10))
> (setq a (+ a 100))
> (floor a nil))
>
> During compilation of the above code, the compiler will know that a is a
> positive integer.
It will? What happens if a overflows?
> Therefore, CHECK_NUMBER, NILP, and FLOATP are not
> necessary and can be omitted in the call to `floor':
If you want to program in C or Fortran, then program in C or Fortran.
Lisp is an interpreted environment that traditionally includes safety
nets. People actually complain to us, and rightfully so, when Emacs
crashes or produces corrupted results instead if signaling an error
pointing out invalid input or other run-time problems.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 10:48 ` Eli Zaretskii
@ 2023-08-21 11:56 ` Ihor Radchenko
2023-08-21 12:22 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 11:56 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: ams, incal, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
>> (let ((a 10))
>> (setq a (+ a 100))
>> (floor a nil))
>>
>> During compilation of the above code, the compiler will know that a is a
>> positive integer.
>
> It will? What happens if a overflows?
It will not, right? Because we do know all the values at compile time in
the above example. I am not sure if we can go as far as checking the value
range at compile time, but it is at least theoretically possible.
>> Therefore, CHECK_NUMBER, NILP, and FLOATP are not
>> necessary and can be omitted in the call to `floor':
>
> If you want to program in C or Fortran, then program in C or Fortran.
> Lisp is an interpreted environment that traditionally includes safety
> nets. People actually complain to us, and rightfully so, when Emacs
> crashes or produces corrupted results instead of signaling an error
> pointing out invalid input or other run-time problems.
I did not mean to disable checks. I just meant that when the types and
possibly value ranges are known at compile time, these checks can be
safely omitted. Without compromising safety.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 11:56 ` Ihor Radchenko
@ 2023-08-21 12:22 ` Eli Zaretskii
0 siblings, 0 replies; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 12:22 UTC (permalink / raw)
To: Ihor Radchenko, Andrea Corallo; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: ams@gnu.org, incal@dataswamp.org, emacs-devel@gnu.org
> Date: Mon, 21 Aug 2023 11:56:20 +0000
>
> Eli Zaretskii <eliz@gnu.org> writes:
>
> >> (let ((a 10))
> >> (setq a (+ a 100))
> >> (floor a nil))
> >>
> >> During compilation of the above code, the compiler will know that a is a
> >> positive integer.
> >
> > It will? What happens if a overflows?
>
> It will not, right? Because we do know all the values at compile time in
> the above example.
In toy programs, perhaps. But not in real life. We want to be able
to write real-life programs in Lisp, not just toy ones.
> > If you want to program in C or Fortran, then program in C or Fortran.
> > Lisp is an interpreted environment that traditionally includes safety
> > nets. People actually complain to us, and rightfully so, when Emacs
> > crashes or produces corrupted results instead if signaling an error
> > pointing out invalid input or other run-time problems.
>
> I did not mean to disable checks. I just meant that when the types and
> possibly value ranges are known at compile time, these checks can be
> safely omitted. Without compromising safety.
Not in ELisp, they cannot. Someone already explained why.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:24 ` Ihor Radchenko
2023-08-21 2:33 ` Eli Zaretskii
@ 2023-08-28 4:41 ` Emanuel Berg
2023-08-28 11:27 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-28 4:41 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
>> The only problem (AFAIU) is that GCC JIT cannot reach
>> inside the subr level, so all this information does not
>> benefit Emacs functions implemented in C.
>
> If I am right here, it might actually be worth it to rewrite
> some of the subroutines into Elisp. For example
> rounding_driver (called by `floor') code is full of runtime
> type checks:
>
> CHECK_NUMBER (n);
> if (NILP (d))
> ...
> CHECK_NUMBER (d);
> ...
> if (FIXNUMP (d))
> if (XFIXNUM (d) == 0)
> ...
> if (FIXNUMP (n))
> ...
> else if (FLOATP (d))
> if (XFLOAT_DATA (d) == 0)
> int nscale = FLOATP (n) ? double_integer_scale (XFLOAT_DATA (n)) : 0;
> ..
>
> During native compilation, if type information and n and
> d is available, GCC might have a chance to cut a number of
> branches away from the above code.
Does this indicate a tendency where one can foresee a future
where Elisp is as fast as C to the point C could be
dropped completely?
Even today we can run singular Elisp programs. But not without
Emacs and its Lisp interpreter, which is written in C.
Still, I wonder if those typechecks in C really slow things
down to the point it matters. Maybe for really huge
number-crunching computations?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-28 4:41 ` Emanuel Berg
@ 2023-08-28 11:27 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-28 11:27 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> During native compilation, if type information and n and
>> d is available, GCC might have a chance to cut a number of
>> branches away from the above code.
>
> Does this indicate a tendency where one can foresee a future
> where Elisp is as fast as C to the point C could be
> dropped completely?
No. Low-level memory management must still be in C. And so must some
performance-critical code that uses internals for efficiency.
For example, `transpose-regions' directly modifies the internal buffer
array holding the byte stream of the buffer text, using memcpy.
There is no way this low-level structure is exposed to Elisp level.
(Otherwise, bad Elisp can simply crash Emacs).
> Still, I wonder if those typechecks in C really slow things
> down to the point it matters. Maybe for really huge
> number-crunching computations?
As with many other things, it depends.
We saw for bignums that typechecks are taking most time.
Typechecks also take significant time in fib benchmarks.
However, for example, Org parser spends most of the time in Emacs regexp
engine, not in typechecks.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:15 ` Ihor Radchenko
2023-08-20 19:24 ` Ihor Radchenko
@ 2023-08-20 20:15 ` Alfred M. Szmidt
2023-08-20 20:39 ` Ihor Radchenko
2023-08-27 4:01 ` Emanuel Berg
2 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 20:15 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
> Then, what does GCC do? AFAIK, GCC JIT takes the Elisp byte code,
> transforms it into JIT pseudocode, and optimizes the actual code flow.
>
> What does GCC do _WHERE_? What backend? What language? You're
> speaking in such broad terms that it makes it impossible to continue
> this discussion. I don't know how the native compilation works, but
> no matter what you feed to GCC it cannot do magic and any optimization
> should be done on what the Emacs compiler does.
Native compilation provides the necessary information about Elisp to GCC.
Native compilation provides nothing of the sort.
Otherwise, native compilation would be useless.
Native compilation removes the indirection of going through the VM,
that is a useful step. It also provides the JIT.
SBCL does transformation of Lisp code, there is a huge difference
there that clearly is being ignored here.
> That is the type of information SBCL knows about, or allows the user
> to specify. Emacs does not have that today, and that incures one set
> of overhead. There are plenty more...
AFAIK, users cannot specify type info manually, but types are tracked
when transforming Elisp byte code into LIMPLE representation.
You cannot track type information in a dynamically typed language
without providing hints, something Emacs lisp does not do.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 20:15 ` Alfred M. Szmidt
@ 2023-08-20 20:39 ` Ihor Radchenko
2023-08-21 5:59 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 20:39 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> SBCL does transformation of Lisp code, there is a huge difference
> there that clearly is being ignored here.
Could you elaborate on what you mean by transformation?
> AFAIK, users cannot specify type info manually, but types are tracked
> when transforming Elisp byte code into LIMP representation.
>
> You cannot track type information in a dynamically typed language
> without providing hints, something Emacs Lisp does not do.
https://zenodo.org/record/3736363, Section 3.4 forward data-flow
analysis.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 20:39 ` Ihor Radchenko
@ 2023-08-21 5:59 ` Alfred M. Szmidt
2023-08-21 6:23 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 5:59 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
> SBCL does transformation of Lisp code, there is a huge difference
> there that clearly is being ignored here.
Could you elaborate on what you mean by transformation?
Dead code elimination, for example; the Emacs Lisp compiler doesn't
do anything with dead code.
> AFAIK, users cannot specify type info manually, but types are tracked
> when transforming Elisp byte code into LIMP representation.
>
> You cannot track type information in a dynamically typed language
> without providing hints, something Emacs Lisp does not do.
https://zenodo.org/record/3736363, Section 3.4 forward data-flow
analysis.
Which has nothing to do with Emacs Lisp. Emacs Lisp lacks basic means
of instructing the compiler, for example ... stating what the function
return type is.
JIT is primarily about execution speed, not about optimizing already
existing slow code, of which Emacs has lots. For that you need a
better compiler, and people optimizing the code accordingly.
That is why SBCL is faster: the compiler is the only thing the SBCL
developers work on.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 5:59 ` Alfred M. Szmidt
@ 2023-08-21 6:23 ` Ihor Radchenko
2023-08-21 7:21 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 6:23 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> https://zenodo.org/record/3736363, Section 3.4 forward data-flow
> analysis.
>
> Which has nothing to do with Emacs Lisp. Emacs Lisp lacks basic means
> of instructing the compiler, for example ... stating what the function
> return type is.
>
> JIT is primarily about execution speed, not about optimizing already
> existing slow code which Emacs has lots of. For that you need a
> better compiler, and people optimizing the code accordingly.
We are miscommunicating.
I do not agree that native compilation has nothing to do with Emacs
Lisp. src/comp.c and lisp/emacs-lisp/native.el gather, among other
things, information about Elisp function return types and function
code flow, and later provide it to GCC JIT. GCC JIT then uses a
state-of-the-art compiler (GCC) to optimize the instruction graph and
convert it to native code. This optimization includes removing dead
code (AFAIR, this was one of the examples provided in the talk and
paper I linked to earlier).
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 6:23 ` Ihor Radchenko
@ 2023-08-21 7:21 ` Alfred M. Szmidt
2023-08-21 7:26 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 7:21 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
If you cannot see the difference between optimizing byte code, and
optimizing Lisp code, I'll find something else to do.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 7:21 ` Alfred M. Szmidt
@ 2023-08-21 7:26 ` Ihor Radchenko
2023-08-21 7:52 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 7:26 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> If you cannot see the difference between optimizing byte code, and
> optimizing Lisp code, I'll find something else to do.
I am talking about the end result (native code) we achieve after
converting source Elisp into byte-code and then into native code. Not
about the byte code.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 7:26 ` Ihor Radchenko
@ 2023-08-21 7:52 ` Alfred M. Szmidt
2023-08-21 10:46 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 7:52 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> If you cannot see the difference between optimizing byte code, and
> optimizing Lisp code, I'll find something else to do.
I am talking about the end result (native code) we achieve after
converting source Elisp into byte-code and then into native code. Not
about the byte code.
The end result depends on what the Emacs Lisp compiler produces.
Native compilation will not figure out that using ASSQ is better when
the call to ASSOC has fixnums in it (see byte-opt.el for an example of
the Lisp wrangling that is required -- something that SBCL does on a
much larger scale).
That is the type of optimizations that matter more than JIT.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 7:52 ` Alfred M. Szmidt
@ 2023-08-21 10:46 ` Ihor Radchenko
2023-08-21 11:02 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 10:46 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: eliz, incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
>> I am talking about the end result (native code) we achieve after
>> converting source Elisp into byte-code and then into native code. Not
>> about the byte code.
>
> The end result depends on what the Emacs Lisp compiler produces.
> Native compilation will not figure out that using ASSQ is better when
> the call to ASSOC has fixnums in it (see byte-opt.el for an example of
> the Lisp wrangling that is required -- something that SBCL does on a
> much larger scale).
>
> That is the type of optimizations that matter more than JIT.
Native compilation can actually do it. And it can (AFAIU) do it more
efficiently than what we have in byte-opt.el, because it uses more
sophisticated data-flow analysis to derive type information.
If we look into the Fassoc implementation, it starts with

  if (eq_comparable_value (key) && NILP (testfn))
    return Fassq (key, alist);
  ...

  eq_comparable_value (Lisp_Object x)
  {
    return SYMBOLP (x) || FIXNUMP (x);
  }

If we have a "libgccjit IR" implementation for Fassoc (see "3.8 final
(code layout)" in https://zenodo.org/record/3736363), the type
information can transform the assoc call into an assq call at compile
time.
The other question is that `assoc' in particular is currently not
implemented in "libgccjit IR". But it can be added, together with other
important primitives.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 10:46 ` Ihor Radchenko
@ 2023-08-21 11:02 ` Alfred M. Szmidt
0 siblings, 0 replies; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 11:02 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
>> I am talking about the end result (native code) we achieve after
>> converting source Elisp into byte-code and then into native code. Not
>> about the byte code.
>
> The end result depends on what the Emacs Lisp compiler produces.
> Native compilation will not figure out that using ASSQ is better when
> the call to ASSOC has fixnums in it (see byte-opt.el for an example of
> the Lisp wrangling that is required -- something that SBCL does on a
> much larger scale).
>
> That is the type of optimizations that matter more than JIT.
Native compilation can actually do it. And it can (AFAIU) do it more
efficiently than what we have in byte-opt.el, because it uses more
sophisticated data-flow analysis to derive type information.
I said _figure out_ -- that is something for a human to do. You can
put the optimization wherever you want; native compilation cannot
figure out things magically. That is essentially what you are
arguing: that native compilation fixes every problem. At the end of
the day, native compilation depends on what the Emacs compiler
produces more than on what simple optimizations you can do.
If the Emacs compiler produces good code, which it does not do today,
then native compilation will also produce better code.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:15 ` Ihor Radchenko
2023-08-20 19:24 ` Ihor Radchenko
2023-08-20 20:15 ` Alfred M. Szmidt
@ 2023-08-27 4:01 ` Emanuel Berg
2023-08-27 8:53 ` Ihor Radchenko
2 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 4:01 UTC (permalink / raw)
To: emacs-devel
Ihor Radchenko wrote:
> AFAIK, users cannot specify type info manually, but types
> are tracked when transforming Elisp byte code into
> LIMP representation.
What is LIMP?
> The only problem (AFAIU) is that GCC JIT cannot reach inside
> subr level, so all this information does not benefit Emacs
> functions implemented in C.
But surely that code is fast enough?
Well, I guess optimally one would want to optimize everything
including Emacs' C.
Or do you mean it obstructs the optimization of our Elisp
as well?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-27 4:01 ` Emanuel Berg
@ 2023-08-27 8:53 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-27 8:53 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> Ihor Radchenko wrote:
>
>> AFAIK, users cannot specify type info manually, but types
>> are tracked when transforming Elisp byte code into
>> LIMP representation.
>
> What is LIMP?
AFAIU, the native compilation code transforms Elisp into an
intermediate representation called LIMPLE. I recommend reading
https://zenodo.org/record/3736363, where the native compilation
process is described in detail, because my own understanding is
limited and I may not be using the terms accurately.
>> The only problem (AFAIU) is that GCC JIT cannot reach inside
>> subr level, so all this information does not benefit Emacs
>> functions implemented in C.
>
> But surely that code is fast enough?
It is fast, but it could be faster if native compilation could cut off
some code branches inside the C code.
> Well, I guess optimally one would want to optimize everything
> including Emacs' C.
>
> Or do you mean it obstructs the optimization of our Elisp
> as well?
Let me demonstrate what I mean by example.
Consider a simple Elisp like
(let ((foo '(a b c))) (length foo))
`length' is defined in C like:

  EMACS_INT val;
  if (STRINGP (sequence))
    val = SCHARS (sequence);
  ....
  else if (CONSP (sequence))
    val = list_length (sequence);
  ....
  else
    wrong_type_argument (Qsequencep, sequence);
  return make_fixnum (val);
In theory, if we had full information, we could just optimize the
initial Elisp to

  make_fixnum (list_length ('(a b c)))

or even to just

  3

as the value of '(a b c) is known at compile time.
However, despite knowing that the value of foo is a list constant at
compile time, native compilation code has no access to the
implementation details of `length' - it is a black box for native
compiler. So, we cannot perform the above optimization.
The workaround currently used is a special compile-time expander for
select C functions (like `define_add1_sub1' for `+').
However, as we discussed earlier, this leads to multiple
implementations of the same function and makes maintenance more
difficult.
So, only some very important C functions can be expanded like this.
For now, AFAIU, only `+' and `-' have native compiler expansion defined.
Other expansions are yet to be implemented.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 18:54 ` Alfred M. Szmidt
2023-08-20 19:07 ` Eli Zaretskii
2023-08-20 19:15 ` Ihor Radchenko
@ 2023-08-27 3:48 ` Emanuel Berg
2023-08-27 9:06 ` Ihor Radchenko
2 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 3:48 UTC (permalink / raw)
To: emacs-devel
Alfred M. Szmidt wrote:
>> For example, when I write
>>
>> (when (> x y) (when (> x y) x))
>>
>> I expect GCC JIT to throw away the duplicate comparison.
>
> Why do you expect that? Why do you think it is duplicate?
> Where are the guarantees that > or WHEN don't have
> side-effects? Do you know the exact type of X and Y so you
> can skip a cascade of type checks to pick the right
> comparison operator? Can you use fixnum comparison of
> a specific bit width? Do you need to use bignum comparison?
>
> That is the type of information SBCL knows about, or allows
> the user to specify. Emacs does not have that today [...]
Why can SBCL answer those questions based on the sample code,
and not Emacs? What is it that they have, and we don't?
And why can't we have it as well?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-27 3:48 ` Emanuel Berg
@ 2023-08-27 9:06 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-27 9:06 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> That is the type of information SBCL knows about, or allows
>> the user to specify. Emacs does not have that today [...]
>
> Why can SBCL answer those questions based on the sample code,
> and not Emacs? What is it that they have, and we don't?
> And why can't we have it as well?
SBCL has a lot more written in low-level CL code, which makes all the
type inference and other optimizations available to the compiler.
However, the SBCL _interpreter_ is much slower than Emacs'.
AFAIU, the reason the Emacs interpreter is faster is that the most
important performance-critical primitives are written directly in C.
However, for the same reason, native compilation in Emacs cannot
optimize the code as much. It is a trade-off.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 15:45 ` Eli Zaretskii
2023-08-20 15:54 ` Ihor Radchenko
@ 2023-08-27 3:25 ` Emanuel Berg
2023-08-27 8:55 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-27 3:25 UTC (permalink / raw)
To: emacs-devel
Eli Zaretskii wrote:
> Native compilation doesn't affect 'car', because it's
> a primitive.
How fast is our C? Do we need to optimize that as well? And isn't
it already compiled for the native architecture?
> It's very easy to see the code of 'car' in Emacs. All you
> need is run GDB:
>
> $ gdb ./emacs
> ...
> (gdb) disassemble /m Fcar
Or do `C-h f car RET TAB RET' to follow the hyperlink to data.c.
It takes you to Fcar at line 614. Fcar, however, is only in C, so it
is not a primitive then.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-27 3:25 ` Emanuel Berg
@ 2023-08-27 8:55 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-27 8:55 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
>> $ gdb ./emacs
>> ...
>> (gdb) disassemble /m Fcar
>
> Or do `C-h f car RET TAB RET' to follow the hyperlink to data.c.
>
> It takes you to Fcar at line 614. Fcar however is only in C,
> so not a primitive then.
It is an Elisp primitive. Please check out the Elisp manual:
2.4.15 Primitive Function Type
------------------------------
A “primitive function” is a function callable from Lisp but written in
the C programming language. Primitive functions are also called “subrs”
or “built-in functions”. (The word “subr” is derived from
“subroutine”.)
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 15:36 ` Ihor Radchenko
2023-08-20 15:45 ` Eli Zaretskii
@ 2023-08-20 16:03 ` Alfred M. Szmidt
2023-08-20 16:34 ` Ihor Radchenko
2023-08-20 19:14 ` Eli Zaretskii
2 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 16:03 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> Please keep the CC intact, not everyone is subscribed.
>
> > It should be quite obvious why SBCL is faster than the Emacs
> > Lisp VM (or even native). Just look at this call to (car
> > "foo"), and compare what happens in Emacs.
> >
> > * (disassemble 'foo)
> > ; disassembly for FOO
> > ; Size: 166 bytes. Origin: #x225D873F ; FOO
>> ...
> Okay?
>
> I guess that you do not understand the above? Or what? Do you know
> and understand what happens in Emacs when a similar call is done? It
> is far more than "166 bytes".
It would be helpful if you showed us what happens in Elisp with a
similar call, especially after native compilation.
I suggest that you try to figure it out; it is a good exercise.
But the big difference is that there is much more indirection between
what SBCL does and what Emacs Lisp does. SBCL is a much more
aggressive optimizer of code. Emacs simply cannot optimize much of
the call _flow_.
And then you will generally have either byte-compiled or interpreted
code to handle (ignoring native compilation, since most people
probably still use the VM) -- all code in SBCL is compiled (EVAL is
essentially a call to the compiler, and the result is then executed).
As an idea, I would take the Gabriel benchmarks and run them in SBCL
vs. Emacs. Take one, and investigate what they do in detail... You
will see that the two worlds are universes far apart.
I am asking genuinely because `car' (1) has a dedicated opcode and
thus should be one of the best-optimized function calls on the Elisp
side; (2) Fcar is nothing but
  /* Take the car or cdr of something whose type is not known. */
  INLINE Lisp_Object
  CAR (Lisp_Object c)
  {
    if (CONSP (c))
      return XCAR (c); // <- XCONS (c)->u.s.car
    if (!NILP (c))
      wrong_type_argument (Qlistp, c);
    return Qnil;
  }
So, it is a very simple example that can actually explain the basic
differences between Elisp and CL. It would be nice if you
(considering your low-level understanding) could provide us with an
analysis of what is different between the Elisp and CL
implementations of such a simple function.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 16:03 ` Alfred M. Szmidt
@ 2023-08-20 16:34 ` Ihor Radchenko
2023-08-20 17:19 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 16:34 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> I'll suggest that you try to figure it out, it is a good exercise.
> But the big difference is that there is much more indirection between
> what SBCL does and what Emacs Lisp does. SBCL is a much more
> aggressive optimizer of code. Emacs simply cannot optimize much of
> the call _flow_.
Ok. Here is what I got for Elisp `car':
Dump of assembler code for function Fcar:
Address range 0x200250 to 0x20026c:
0x0000000000200250 <+0>: lea -0x3(%rdi),%eax
0x0000000000200253 <+3>: test $0x7,%al
0x0000000000200255 <+5>: jne 0x200260 <Fcar+16>
0x0000000000200257 <+7>: mov -0x3(%rdi),%rax
0x000000000020025b <+11>: ret
0x000000000020025c <+12>: nopl 0x0(%rax)
0x0000000000200260 <+16>: test %rdi,%rdi
0x0000000000200263 <+19>: jne 0x5007d <Fcar.cold>
0x0000000000200269 <+25>: xor %eax,%eax
0x000000000020026b <+27>: ret
Address range 0x5007d to 0x5008b:
0x000000000005007d <-1769939>: push %rax
0x000000000005007e <-1769938>: mov %rdi,%rsi
0x0000000000050081 <-1769935>: mov $0xaf20,%edi
0x0000000000050086 <-1769930>: call 0x4ffc7 <wrong_type_argument>
Does not look too bad in terms of the number of instructions. And I do
not see any obvious indirection.
> And then you will generally either have byte compiler, or interpreted
> code to handle (ignoring native compile, since must people probobly
> still use the VM) -- all code in SBCL is comnpiled (EVAL is essentially
> a call to the compiler, and then executed).
IMHO, we should not ignore native compilation. If we want to improve
the peak performance of Elisp, native compilation should be an
essential part of it. Then, for real improvements, we should focus on
what native compilation cannot optimize.
> As an idea, I would take the Gabriel benchmarks and run them in SBCL
> vs. Emacs. Take one, and investigate what they do in detail... You
> will see that the two worlds are universes far apart.
Sure. That's what I asked Emanuel to do.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 16:34 ` Ihor Radchenko
@ 2023-08-20 17:19 ` Alfred M. Szmidt
2023-08-20 17:25 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 17:19 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: incal, emacs-devel
Does not look too bad in terms of the number of instructions. And I do
not see any obvious indirection.
The Emacs VM will incur a switch to C for each call; SBCL doesn't.
You really cannot see the difference that makes?
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 17:19 ` Alfred M. Szmidt
@ 2023-08-20 17:25 ` Ihor Radchenko
2023-08-20 18:54 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 17:25 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> Does not look too bad in terms of the number of instructions. And I do
> not see any obvious indirection.
>
> The Emacs VM will incur a switch to C for each call; SBCL doesn't.
> You really cannot see the difference that makes?
May you elaborate what you mean by "switch to C"?
Emacs VM is running in C already.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 17:25 ` Ihor Radchenko
@ 2023-08-20 18:54 ` Alfred M. Szmidt
2023-08-20 19:02 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 18:54 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: incal, emacs-devel
> Does not look too bad in terms of the number of instructions. And I do
> not see any obvious indirection.
>
> The Emacs VM will incur a switch to C for each call; SBCL doesn't.
> You really cannot see the difference that makes?
May you elaborate what you mean by "switch to C"?
Emacs VM is running in C already.
How does the decoding of bytecode to C happen?
Please take a look at the source code, it isn't that gnarly...
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 18:54 ` Alfred M. Szmidt
@ 2023-08-20 19:02 ` Eli Zaretskii
2023-08-20 20:11 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-20 19:02 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: yantar92, incal, emacs-devel
> From: "Alfred M. Szmidt" <ams@gnu.org>
> Cc: incal@dataswamp.org, emacs-devel@gnu.org
> Date: Sun, 20 Aug 2023 14:54:43 -0400
>
> > Does not look too bad in terms of the number of instructions. And I do
> > not see any obvious indirection.
> >
> > The Emacs VM will incur a switch to C for each call; SBCL doesn't.
> > You really cannot see the difference that makes?
>
> May you elaborate what you mean by "switch to C"?
> Emacs VM is running in C already.
>
> How does the decoding of bytecode to C happen?
It doesn't. bytecode.c is already written in C.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:02 ` Eli Zaretskii
@ 2023-08-20 20:11 ` Alfred M. Szmidt
2023-08-23 21:09 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 20:11 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: yantar92, incal, emacs-devel
> From: "Alfred M. Szmidt" <ams@gnu.org>
> Cc: incal@dataswamp.org, emacs-devel@gnu.org
> Date: Sun, 20 Aug 2023 14:54:43 -0400
>
> > Does not look too bad in terms of the number of instructions. And I do
> > not see any obvious indirection.
> >
> > The Emacs VM will incur a switch to C for each call; SBCL doesn't.
> > You really cannot see the difference that makes?
>
> May you elaborate what you mean by "switch to C"?
> Emacs VM is running in C already.
>
> How does the decoding of bytecode to C happen?
It doesn't. bytecode.c is already written in C.
Sloppy wording on my side: Emacs has to loop over the byte code to
execute it, and then maybe call C or Lisp depending on the
instruction. SBCL, on the other hand, compiles everything to whatever
the target architecture is -- so no bytecode is involved, and that
(small) indirection is avoided, since it all becomes a normal
function call.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 20:11 ` Alfred M. Szmidt
@ 2023-08-23 21:09 ` Emanuel Berg
2023-08-26 2:01 ` Richard Stallman
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-23 21:09 UTC (permalink / raw)
To: emacs-devel
Alfred M. Szmidt wrote:
> While SBCL will compile everything to whatever the target
> architecture is -- so no bytecode is involved, and that
> (small) indirection is avoided since it all becomes a normal
> function call.
This sounds like a good explanation, since it is a general
explanation on the level of different execution models, not
individual optimizations implemented explicitly for certain
algorithms, like we saw with the Elisp vs. CL versions of Fibonacci.
Bytecode is slower since more instructions are carried out than when
only machine instructions do the job.
And if the advantage of virtual machines and bytecode is portability,
it brings us back to the initial "SBCL isn't portable" at the other
end of the spectrum.
Is native compilation of Elisp not fully able to bridge
that gap?
PS. I agree native compilation should be encouraged for everyone, as
it makes the interactive feel of Emacs much faster. This includes
general use, so it isn't just a matter of executing heavy
computations, if anyone was under that impression -- but those are
faster as well, of course.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-23 21:09 ` Emanuel Berg
@ 2023-08-26 2:01 ` Richard Stallman
2023-08-26 5:48 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Richard Stallman @ 2023-08-26 2:01 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
All else being equal, it is useful to speed up Emacs Lisp execution,
but that should not take priority over other desirable goals.
Bytecode has many advantages for Emacs. It is portable, it is simple,
it is fast to generate, and it doesn't break.
For most purposes, this is more important than execution speed. Most
Emacs commands' speed is limited in practice by the speed of the
user's typing. The few exceptions, in my usage, are limited by
the speed of searching large buffers.
--
Dr Richard Stallman (https://stallman.org)
Chief GNUisance of the GNU Project (https://gnu.org)
Founder, Free Software Foundation (https://fsf.org)
Internet Hall-of-Famer (https://internethalloffame.org)
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-26 2:01 ` Richard Stallman
@ 2023-08-26 5:48 ` Eli Zaretskii
2023-08-26 18:15 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-26 5:48 UTC (permalink / raw)
To: rms; +Cc: incal, emacs-devel
> From: Richard Stallman <rms@gnu.org>
> Cc: emacs-devel@gnu.org
> Date: Fri, 25 Aug 2023 22:01:59 -0400
>
> Most Emacs commands' speed is limited in practice by the speed of
> the user's typing. The few exceptions, in my usage, are limited by
> the speed of searching large buffers.
The above is correct, but is nowadays incomplete. Here are some
relevant observations:
. There are quite a few Lisp programs that run off post-command-hook
or by timers. Those can slow down Emacs even while the user types,
and even though the commands invoked by the actual keys the user
types are themselves fast.
. Searching large buffers is quite fast, but processing on top of
that might not be. There are nowadays many features that work via
various hooks and Lisp invocations from C (example: syntax search
and analysis), and those can completely shadow the (usually small)
cost of searching itself.
. Some commands, such as byte-compile etc., perform significant
processing in Lisp that can be slow. Some features, such as shr.el
(which is used in commands that render HTML) also perform
significant processing in Lisp.
For these and other reasons, an Emacs with native-compilation feels
tangibly faster than Emacs without native-compilation, and IMO that
justifies the known downsides: the fact that native compilation is
slower than byte compilation, and the compiled files are non-portable.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-26 5:48 ` Eli Zaretskii
@ 2023-08-26 18:15 ` Emanuel Berg
2023-08-26 18:27 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-26 18:15 UTC (permalink / raw)
To: emacs-devel
Eli Zaretskii wrote:
>> Most Emacs commands' speed is limited in practice by the
>> speed of the user's typing. The few exceptions, in my
>> usage, are limited by the speed of searching large buffers.
>
> [...] For these and other reasons, an Emacs with
> native-compilation feels tangibly faster than Emacs without
> native-compilation
Absolutely, heavy number-crunching computations will be faster,
but so will everyday life inside Emacs; you notice
this immediately.
So people should not be afraid, they should be encouraged to
try it.
If the popup buffers with tons of warnings and the on-the-fly
nature of the compilation itself are factors that might scare
people away, we should work against that as well. For example,
all [M]ELPA package maintainers should be encouraged to clean
up their code and eliminate the warnings, not take them
lightly.
And maybe add an option to not have native compilation popup
buffers at all, so it would be completely silent.
> and IMO that justifies the known downsides: the fact that
> native compilation is slower than byte compilation
But stuff that happens only once doesn't have to be fast.
And one can think of a scenario where all Elisp is natively
compiled once and for all, like I did here:
https://dataswamp.org/~incal/emacs-init/native.el
This will populate the eln-cache with some stuff that in
practice is never used for a particular user, since no single
individual uses all of Emacs, but my record is still just 1549
files so searching the cache should still be fast.
> and the compiled files are non-portable
The files are not portable, but the feature is, so unless one
runs some really exotic processor one can still grab the Elisp
files and native-compile them on the local machine.
BTW, aren't the eln files portable for use within the same
architecture family? But even if they are, I don't see why
anyone would use (share) them like that when native
compilation is so easy to do locally.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 15:36 ` Ihor Radchenko
2023-08-20 15:45 ` Eli Zaretskii
2023-08-20 16:03 ` Alfred M. Szmidt
@ 2023-08-20 19:14 ` Eli Zaretskii
2023-08-20 19:44 ` Ihor Radchenko
2 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-20 19:14 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Emanuel Berg <incal@dataswamp.org>, emacs-devel@gnu.org
> Date: Sun, 20 Aug 2023 15:36:34 +0000
>
> I am asking genuinely because `car' (1) has a dedicated op-code and thus
> should be one of the best-optimized function calls on the Elisp side; (2)
> Fcar is nothing but
>
> /* Take the car or cdr of something whose type is not known. */
> INLINE Lisp_Object
> CAR (Lisp_Object c)
> {
>   if (CONSP (c))
>     return XCAR (c); // <- XCONS (c)->u.s.car
>   if (!NILP (c))
>     wrong_type_argument (Qlistp, c);
>   return Qnil;
> }
'car' does have a dedicated bytecode op-code, but that op-code simply
calls XCAR, exactly like Fcar and CAR above do:
CASE (Bcar):
  if (CONSP (TOP))
    TOP = XCAR (TOP);
  else if (!NILP (TOP))
    wrong_type_argument (Qlistp, TOP);
  NEXT;
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:14 ` Eli Zaretskii
@ 2023-08-20 19:44 ` Ihor Radchenko
2023-08-20 20:11 ` Alfred M. Szmidt
2023-08-21 2:35 ` Eli Zaretskii
0 siblings, 2 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-20 19:44 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: ams, incal, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
> 'car' does have a dedicated bytecode op-code, but that op-code simply
> calls XCAR, exactly like Fcar and CAR above do:
Then, I conclude that the example with the CL version of `car' is actually
not worse in Elisp:
>> It should be quite obvious why SBCL is faster than the Emacs Lisp VM
>> (or even native). Just look at this call to (car "foo"), and compare
>> what happens in Emacs.
>>
>> * (disassemble 'foo)
>> ; disassembly for FOO
>> ; Size: 166 bytes. Origin: #x225D873F ; FOO
>> ...
In fact, the SBCL version looks more complex.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:44 ` Ihor Radchenko
@ 2023-08-20 20:11 ` Alfred M. Szmidt
2023-08-21 2:35 ` Eli Zaretskii
1 sibling, 0 replies; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-20 20:11 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: eliz, incal, emacs-devel
> 'car' does have a dedicated bytecode op-code, but that op-code simply
> calls XCAR, exactly like Fcar and CAR above do:
Then, I conclude that the example with CL version of `car' is actually
not worse in Elisp:
Then you conclude wrongly; you're comparing a full lambda, with body
and what not.
Given the basic lack of compiler theory I'm sorta going to leave this
discussion for now.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 19:44 ` Ihor Radchenko
2023-08-20 20:11 ` Alfred M. Szmidt
@ 2023-08-21 2:35 ` Eli Zaretskii
2023-08-21 8:48 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 2:35 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: ams@gnu.org, incal@dataswamp.org, emacs-devel@gnu.org
> Date: Sun, 20 Aug 2023 19:44:20 +0000
>
> Eli Zaretskii <eliz@gnu.org> writes:
>
> > 'car' does have a dedicated bytecode op-code, but that op-code simply
> > calls XCAR, exactly like Fcar and CAR above do:
>
> Then, I conclude that the example with CL version of `car' is actually
> not worse in Elisp:
I think you forget the price of running the interpreter. After
computing the value of 'car', the code must use it, and that's where
the difference comes from. Look at bytecode.c, from which I quoted a
tiny fragment, to see what Emacs does with the results of each
op-code. (It's actually what every byte-code machine out there does.)
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 2:35 ` Eli Zaretskii
@ 2023-08-21 8:48 ` Ihor Radchenko
2023-08-21 11:10 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 8:48 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: ams, incal, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
>> > 'car' does have a dedicated bytecode op-code, but that op-code simply
>> > calls XCAR, exactly like Fcar and CAR above do:
>>
>> Then, I conclude that the example with CL version of `car' is actually
>> not worse in Elisp:
>
> I think you forget the price of running the interpreter. After
> computing the value of 'car', the code must use it, and that's where
> the difference comes from. Look at bytecode.c, from which I quoted a
> tiny fragment, to see what Emacs does with the results of each
> op-code. (It's actually what every byte-code machine out there does.)
Do I understand correctly that the extra work that has to be done by
the byte-code machine is register manipulation? If so, the assembly will
probably look similar - all these extra `mov's we see in the CL version
will also be needed in Elisp to manipulate the return value of the `car'
call.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 8:48 ` Ihor Radchenko
@ 2023-08-21 11:10 ` Eli Zaretskii
2023-08-21 11:59 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 11:10 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: ams@gnu.org, incal@dataswamp.org, emacs-devel@gnu.org
> Date: Mon, 21 Aug 2023 08:48:55 +0000
>
> >> Then, I conclude that the example with CL version of `car' is actually
> >> not worse in Elisp:
> >
> > I think you forget the price of running the interpreter. After
> > computing the value of 'car', the code must use it, and that's where
> > the difference comes from. Look at bytecode.c, from which I quoted a
> > tiny fragment, to see what Emacs does with the results of each
> > op-code. (It's actually what every byte-code machine out there does.)
>
> Do I understand correctly that the extra staff that has to be done by
> the byte-code machine is register manipulation?
Which registers do you have in mind here?
bytecode.c implements a stack-based machine, see the comments and
"ASCII-art" picture around line 340 in bytecode.c. Then study the
macros used in bytecode.c, like TOP, PUSH, etc., and you will see what
I mean.
> If so, the assembly will probably look similar
I don't think so. You can compare the GDB disassembly with the
results of byte-code disassembly (the "M-x disassemble" command in
Emacs), and I'm quite sure you will see the results are very
different.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 11:10 ` Eli Zaretskii
@ 2023-08-21 11:59 ` Ihor Radchenko
2023-08-21 12:23 ` Eli Zaretskii
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 11:59 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: ams, incal, emacs-devel
Eli Zaretskii <eliz@gnu.org> writes:
>> If so, the assembly will probably look similar
>
> I don't think so. You can compare the GDB disassembly with the
> results of byte-code disassembly (the "M-x disassemble" command in
> Emacs), and I'm quite sure you will see the results are very
> different.
Could you show how to get this GDB disassembly for a toy function?
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 11:59 ` Ihor Radchenko
@ 2023-08-21 12:23 ` Eli Zaretskii
2023-08-23 10:13 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 12:23 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: ams, incal, emacs-devel
> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: ams@gnu.org, incal@dataswamp.org, emacs-devel@gnu.org
> Date: Mon, 21 Aug 2023 11:59:42 +0000
>
> Eli Zaretskii <eliz@gnu.org> writes:
>
> >> If so, the assembly will probably look similar
> >
> > I don't think so. You can compare the GDB disassembly with the
> > results of byte-code disassembly (the "M-x disassemble" command in
> > Emacs), and I'm quite sure you will see the results are very
> > different.
>
> May you show how to get this GDB disassembly for a toy function?
You already did that, at least twice, in this discussion. So what
should I show you that you don't already know?
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 15:22 ` Alfred M. Szmidt
2023-08-20 15:36 ` Ihor Radchenko
@ 2023-08-20 20:32 ` Emanuel Berg
2023-08-21 6:19 ` Alfred M. Szmidt
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-20 20:32 UTC (permalink / raw)
To: emacs-devel
Alfred M. Szmidt wrote:
> Please keep the CC intact, not everyone subscribed.
Yes, that is some Gnus configuration "ghost" I must have done,
several people have pointed it out, but I have been unable to
locate it - so far.
>>> If we talk about type checking, Elisp uses dynamic typing
>>> and compilation cannot do much about it.
>>> Native compilation also does not touch C subroutines - the
>>> place where typechecks are performed.
>>
>> SBCL implements a Lisp, Lisp by definition is
>> dynamically typed.
>
> Only for the kind of use (code) that we are used to.
> See this:
>
> https://medium.com/@MartinCracauer/static-type-checking-in-the-programmable-programming-language-lisp-79bb79eb068a
>
> This has literally nothing to do with the difference between
> static typing, and dynamic typing.
Types are checked at compile time, and with declare one can
influence execution based on that. It doesn't mean subsequent
execution won't have to check types; for that one would need
complete inference like they have in SML.
That would be really nice to have, BTW. Are you saying that
isn't possible with Lisp? Why not?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-20 20:32 ` Emanuel Berg
@ 2023-08-21 6:19 ` Alfred M. Szmidt
2023-08-21 6:26 ` Ihor Radchenko
2023-08-22 23:55 ` Emanuel Berg
0 siblings, 2 replies; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 6:19 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
> Please keep the CC intact, not everyone subscribed.
Yes, that is some Gnus configuration "ghost" I must have done,
several people have pointed it out, but I have been unable to
locate it - so far.
I suppose that is also why I get a bounce from you?
> Hi!
>
> This is the MAILER-DAEMON, please DO NOT REPLY to this email.
>
> An error has occurred while attempting to deliver a message for
> the following list of recipients:
>
> incal@dataswamp.org: "stdin: 0 messages processed in 0.001 seconds"
Adding it here, since it seems impossible to send messages to your
swamp.
>>> If we talk about type checking, Elisp uses dynamic typing
>>> and compilation cannot do much about it.
>>> Native compilation also does not touch C subroutines - the
>>> place where typechecks are performed.
>>
>> SBCL implements a Lisp, Lisp by definition is
>> dynamically typed.
>
> Only for the kind of use (code) that we are used to.
> See this:
>
> https://medium.com/@MartinCracauer/static-type-checking-in-the-programmable-programming-language-lisp-79bb79eb068a
>
> This has literally nothing to do with the difference between
> static typing, and dynamic typing.
Types are checked at compile time, and with declare one can
influence execution based on that. It doesn't mean subsequent
execution won't have to check types; for that one would need
complete inference like they have in SML.
That is not the meaning of dynamically or statically typed.
In a statically typed language you know every type at compile time; in a
dynamically typed language you have, in general, no clue about it at
compile time. The article in question claims that Lisp is
statically typed, which is a total misunderstanding of the term.
That would be really nice to have, BTW. Are you saying that
isn't possible with Lisp? Why not?
Because you literally always have to check the type, even with a
declare/declaim you are allowed to pass garbage. The compiler can
produce a warning, but it isn't an error in Lisp, while it is an error
in a statically typed language already at compile time. In Lisp you
never know the types of things.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 6:19 ` Alfred M. Szmidt
@ 2023-08-21 6:26 ` Ihor Radchenko
2023-08-21 7:21 ` Alfred M. Szmidt
2023-08-22 23:55 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 6:26 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: Emanuel Berg, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> ... In Lisp you
> never know the types of things.
This is not true.
For example, when the compiler sees a (list ...) call, it knows the call
is guaranteed to return a list. The same idea can be applied to a number
of other core functions. The return types of other functions may then (in
many cases) be derived by the compiler.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 6:26 ` Ihor Radchenko
@ 2023-08-21 7:21 ` Alfred M. Szmidt
2023-08-21 7:25 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 7:21 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: incal, emacs-devel
> ... In Lisp you
> never know the types of things.
This is not true.
It is absolutely true: you cannot know what the value of a variable is
without checking its type at run time. Variables have no type
information.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 7:21 ` Alfred M. Szmidt
@ 2023-08-21 7:25 ` Ihor Radchenko
2023-08-21 7:52 ` Alfred M. Szmidt
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 7:25 UTC (permalink / raw)
To: Alfred M. Szmidt; +Cc: incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> > ... In Lisp you
> > never know the types of things.
>
> This is not true.
>
> It is absolutley true, you cannot know what the value of a variable is
> without checking the type of it at run time. Variables have no type
> information.
What about
(progn
  (setq x (list 'a 'b 'c))
  (listp x))
`listp' call can definitely be optimized once the compiler knows that
`list' returns a list.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 7:25 ` Ihor Radchenko
@ 2023-08-21 7:52 ` Alfred M. Szmidt
2023-08-21 11:26 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-21 7:52 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: incal, emacs-devel
> > ... In Lisp you
> > never know the types of things.
>
> This is not true.
>
> It is absolutley true, you cannot know what the value of a variable is
> without checking the type of it at run time. Variables have no type
> information.
What about
(progn
(setq x (list 'a 'b 'c))
(listp x))
`listp' call can definitely be optimized once the compiler knows that
`list' returns a list.
A sufficiently smart compiler would optimize that to t, sure; the Emacs
Lisp compiler doesn't. And that is one of the issues, native
compilation or not, since right now this byte code is what native
compilation gets to work with, and it cannot do magic no matter how much
magic dust you give it.
byte code for foo:
args: nil
0 constant a
1 constant b
2 constant c
3 list3
4 dup
5 varset x
6 listp
7 return
Native compilation will not help you when you need to figure out what
is to be done in:
(defun foo (x) (assoc x '((123 . a) (456 . b) (798 . c))))
E.g., to call ASSQ instead, since it is just fixnums, and ASSOC uses
EQUAL which is "slow". The Emacs Lisp compiler cannot optimize that
code today, and native compilation will get whatever the compiler
produced, with the call to ASSOC no matter what.
That is what it means to optimize Lisp code.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 7:52 ` Alfred M. Szmidt
@ 2023-08-21 11:26 ` Ihor Radchenko
0 siblings, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 11:26 UTC (permalink / raw)
To: Alfred M. Szmidt, Andrea Corallo; +Cc: incal, emacs-devel
"Alfred M. Szmidt" <ams@gnu.org> writes:
> (progn
> (setq x (list 'a 'b 'c))
> (listp x))
>
> `listp' call can definitely be optimized once the compiler knows that
> `list' returns a list.
>
> A sufficiently smart compiler would optimize that to T sure, the Emacs
> Lisp compiler doesn't. And that is one of the issues, native
> compilation or not, since right now native compilation gets this to
> work with, and it cannot do magic no matter how much magic dust you
> give it.
>
> byte code for foo:
> args: nil
> 0 constant a
> 1 constant b
> 2 constant c
> 3 list3
> 4 dup
> 5 varset x
> 6 listp
> 7 return
AFAIK, native compilation should be able to optimize the above byte
code. At least, that's what I thought. But looking at the disassembly, it
might actually not be the case.
Oh well. I am clearly missing something about how things work.
(defun test0 ()
  "Return value")
(defun test1 ()
  (let ((x (list 'a 'b 'c)))
    (when (listp x) "Return value")))
(disassemble (byte-compile #'test0))
byte code:
doc: Return value
args: nil
0 constant "Return value"
1 return
(native-compile #'test0 "/tmp/test0.eln")
(disassemble #'test0)
0000000000001100 <F7465737430_test0_0>:
1100: 48 8b 05 c1 2e 00 00 mov 0x2ec1(%rip),%rax # 3fc8 <d_reloc@@Base-0x218>
1107: 48 8b 00 mov (%rax),%rax
110a: c3 ret
110b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
Now, test1
(disassemble (byte-compile #'test1))
byte code:
args: nil
0 constant a
1 constant b
2 constant c
3 list3
4 dup
5 varbind x
6 listp
7 goto-if-nil-else-pop 1
10 constant "Return value"
11:1 unbind 1
12 return
(native-compile #'test1 "/tmp/test1.eln")
(disassemble #'test1)
0000000000001100 <F7465737431_test1_0>:
1100: 48 8b 05 c9 2e 00 00 mov 0x2ec9(%rip),%rax # 3fd0 <freloc_link_table@@Base-0x268>
1107: 41 54 push %r12
1109: 31 f6 xor %esi,%esi
110b: 55 push %rbp
110c: 4c 8b 25 b5 2e 00 00 mov 0x2eb5(%rip),%r12 # 3fc8 <d_reloc@@Base-0x218>
1113: 53 push %rbx
1114: 48 8b 18 mov (%rax),%rbx
1117: 49 8b 7c 24 10 mov 0x10(%r12),%rdi
111c: ff 93 d0 20 00 00 call *0x20d0(%rbx)
1122: 49 8b 7c 24 08 mov 0x8(%r12),%rdi
1127: 48 89 c6 mov %rax,%rsi
112a: ff 93 d0 20 00 00 call *0x20d0(%rbx)
1130: 49 8b 3c 24 mov (%r12),%rdi
1134: 48 89 c6 mov %rax,%rsi
1137: ff 93 d0 20 00 00 call *0x20d0(%rbx)
113d: 49 8b 7c 24 20 mov 0x20(%r12),%rdi
1142: 48 89 c5 mov %rax,%rbp
1145: 48 89 c6 mov %rax,%rsi
1148: ff 53 58 call *0x58(%rbx)
114b: 48 89 ef mov %rbp,%rdi
114e: ff 93 30 29 00 00 call *0x2930(%rbx)
1154: 48 85 c0 test %rax,%rax
1157: 74 17 je 1170 <F7465737431_test1_0+0x70>
1159: 49 8b 6c 24 30 mov 0x30(%r12),%rbp
115e: bf 06 00 00 00 mov $0x6,%edi
1163: ff 53 28 call *0x28(%rbx)
1166: 5b pop %rbx
1167: 48 89 e8 mov %rbp,%rax
116a: 5d pop %rbp
116b: 41 5c pop %r12
116d: c3 ret
116e: 66 90 xchg %ax,%ax
1170: 48 89 c5 mov %rax,%rbp
1173: bf 06 00 00 00 mov $0x6,%edi
1178: ff 53 28 call *0x28(%rbx)
117b: 48 89 e8 mov %rbp,%rax
117e: 5b pop %rbx
117f: 5d pop %rbp
1180: 41 5c pop %r12
1182: c3 ret
1183: 66 66 2e 0f 1f 84 00 data16 cs nopw 0x0(%rax,%rax,1)
118a: 00 00 00 00
118e: 66 90 xchg %ax,%ax
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-21 6:19 ` Alfred M. Szmidt
2023-08-21 6:26 ` Ihor Radchenko
@ 2023-08-22 23:55 ` Emanuel Berg
2023-08-23 7:04 ` Alfred M. Szmidt
1 sibling, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-22 23:55 UTC (permalink / raw)
To: emacs-devel
Alfred M. Szmidt wrote:
>>> Please keep the CC intact, not everyone subscribed.
>>
>> Yes, that is some Gnus configuration "ghost" I must have
>> done, several people have pointed it out, but I have been
>> unable to locate it - so far.
>
> I suppose that is also why I get a bounce from you?
No, that is something else, and only happens on MLs.
In ~/.fdm.conf I have a line like this
match case "emacs-devel@gnu.org" in headers action "drop"
I put it there since I read those messages with Gnus and
Gmane, and thus can dispose of the e-mail copies saying the same
thing - maybe that is what is causing the delivery failure
messages. Let's comment it out then and see if it helps.
> That is not the meaning of dynamically or statically typed.
> In a statically typed language you know every type at compile
> time; in a dynamically typed language you have, in general, no
> clue about it at compile time. The article in question claims
> that Lisp is statically typed, which is a total
> misunderstanding of the term.
>
>> That would be really nice to have BTW, are you saying that
>> isn't possible with Lisp? Why not?
>
> Because you literally always have to check the type, even
> with a declare/declaim you are allowed to pass garbage.
> The compiler can produce a warning, but it isn't an error in
> Lisp, while it is an error in a statically typed language
> already at compile time. In Lisp you never know the types
> of things.
In C types are explicit in the source, and in SML the types of
everything, including functions and by extension combinations
of functions, can be inferred, so that the type of a function
to add two integers is described as a mapping from two
integers to another integer.
Can those methods not be used to get a statically typed Lisp?
Why not? Because of the Lisp syntax?
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-22 23:55 ` Emanuel Berg
@ 2023-08-23 7:04 ` Alfred M. Szmidt
2023-08-23 17:24 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Alfred M. Szmidt @ 2023-08-23 7:04 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Those methods cannot be used to get a statically typed Lisp?
Why not? Because of the Lisp syntax?
(let ((x 123))
  (setq x "foo")
  (setq x (current-buffer)))
You cannot disallow the above, or prohibit it -- you'd get an entirely
different language that isn't Lisp. You can tell the compiler that
"yes, I promise that X is a fixnum", but in a statically typed language
that is a hard error.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-23 7:04 ` Alfred M. Szmidt
@ 2023-08-23 17:24 ` Emanuel Berg
2023-08-24 20:02 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-23 17:24 UTC (permalink / raw)
To: emacs-devel
Alfred M. Szmidt wrote:
>> Those methods cannot be used to get a statically typed
>> Lisp? Why not? Because of the Lisp syntax?
>
> (let ((x 123))
>   (setq x "foo")
>   (setq x (current-buffer)))
>
> You cannot disallow the above, or prohibit it -- you'd get
> a entierly different language that isn't Lisp.
It would be different, but I don't know if it would be
entirely different necessarily. It would be interesting to
try anyway.
Anyway, having commented out the ~/.fdm.conf instruction that
drops mails from my INBOX that also appear as posts on Gmane,
I now received this mail as a mail copy as well as on Gmane,
from where I type this. Did you still get the non-delivery
e-mail? If not, at least one mystery less to solve.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-23 17:24 ` Emanuel Berg
@ 2023-08-24 20:02 ` Emanuel Berg
0 siblings, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-24 20:02 UTC (permalink / raw)
To: emacs-devel
> Anyway, having commented out the ~/.fdm.conf instruction that
> drops mails from my INBOX that also appear as posts on Gmane,
> I now received this mail as a mail copy as well as on Gmane,
> from where I type this. Did you still get the non-delivery
> e-mail? If not, at least one mystery less to solve.
Yes, for completeness, it was these lines in ~/.fdm.conf that
caused the error, so instead of dropping the mails like I did,
I should probably pipe them to some directory where I don't
see them.
Don't try this at home!
# MLs
match case "debian-user@lists.debian.org" in headers action "drop"
match case "emacs-devel@gnu.org" in headers action "drop"
match case "emacs-tangents@gnu.org" in headers action "drop"
match case "gmane-test@quimby.gnus.org" in headers action "drop"
match case "help-gnu-emacs@gnu.org" in headers action "drop"
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* RE: [External] : Re: Shrinking the C core
2023-08-20 6:51 ` Emanuel Berg
2023-08-20 7:14 ` Ihor Radchenko
@ 2023-08-20 21:51 ` Drew Adams
2023-08-21 8:54 ` Type declarations in Elisp (was: [External] : Re: Shrinking the C core) Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Drew Adams @ 2023-08-20 21:51 UTC (permalink / raw)
To: Emanuel Berg, emacs-devel@gnu.org
> > The discussion about floor started from Alfred using `floor'
> > as an example where CL uses system-dependent optimizations
> > and is thus much faster.
>
> So the answer to the question, Why is SBCL faster?
> is "optimizations". And the answer to the question, Why don't
> we have those optimizations? is "they are not portable"?
https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node103.html#SECTION001300000000000000000
Common Lisp is a standard. Different implementations
of it should respect the standard, but the standard
allows for different behaviors to some extent, esp.
wrt performance. CL has multiple ways of declaring
different levels of optimization, which a given
implementation can support or not.
Particular optimizations are not expected to be
portable.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Type declarations in Elisp (was: [External] : Re: Shrinking the C core)
2023-08-20 21:51 ` [External] : " Drew Adams
@ 2023-08-21 8:54 ` Ihor Radchenko
2023-08-21 9:30 ` Gerd Möllmann
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 8:54 UTC (permalink / raw)
To: Drew Adams; +Cc: Emanuel Berg, emacs-devel@gnu.org
Drew Adams <drew.adams@oracle.com> writes:
>> So the answer to the question, Why is SBCL faster?
>> is "optimizations". And the answer to the question, Why don't
>> we have those optimizations? is "they are not portable"?
>
> https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node103.html#SECTION001300000000000000000
>
> Common Lisp is a standard. Different implementations
> of it should respect the standard, but the standard
> allows for different behaviors to some extent, esp.
> wrt performance. CL has multiple ways of declaring
> different levels of optimization, which a given
> implementation can support or not.
I am wondering if type, ftype, and inline declarations could be added to
Elisp. Native compilation already uses a fixed set of ftype
declarations, but the set cannot be modified, and declarations cannot be
given per-defun.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Type declarations in Elisp (was: [External] : Re: Shrinking the C core)
2023-08-21 8:54 ` Type declarations in Elisp (was: [External] : Re: Shrinking the C core) Ihor Radchenko
@ 2023-08-21 9:30 ` Gerd Möllmann
2023-08-21 11:13 ` Type declarations in Elisp Eli Zaretskii
2023-08-21 11:37 ` Type declarations in Elisp (was: [External] : Re: Shrinking the C core) Ihor Radchenko
0 siblings, 2 replies; 247+ messages in thread
From: Gerd Möllmann @ 2023-08-21 9:30 UTC (permalink / raw)
To: yantar92; +Cc: drew.adams, emacs-devel, incal
> I am wondering if type, ftype, and inline declarations could be added to
> Elisp. Native compilation already uses a fixed set of ftype
> declarations, but it cannot be modified and cannot be declared
> per-defun.
I'd rather see some profile runs first that show where, and how much,
time is spent in some typical (tm) native-compiled ELisp program.
I personally was actually surprised by how much faster native compilation
makes Emacs feel, because my gut feeling was that ELisp programs spend
most of their time in C anyway. Everything having to do with buffer-text
manipulation, searches, text-properties, the list goes on... Even
buffer-local bindings, come to think of it.
Anyway, profiling would be interesting, I think.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-21 9:30 ` Gerd Möllmann
@ 2023-08-21 11:13 ` Eli Zaretskii
2023-08-21 11:37 ` Type declarations in Elisp (was: [External] : Re: Shrinking the C core) Ihor Radchenko
1 sibling, 0 replies; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-21 11:13 UTC (permalink / raw)
To: Gerd Möllmann; +Cc: yantar92, drew.adams, emacs-devel, incal
> Date: Mon, 21 Aug 2023 11:30:45 +0200
> Cc: drew.adams@oracle.com, emacs-devel@gnu.org, incal@dataswamp.org
> From: Gerd Möllmann <gerd.moellmann@gmail.com>
>
> > I am wondering if type, ftype, and inline declarations could be added to
> > Elisp. Native compilation already uses a fixed set of ftype
> > declarations, but it cannot be modified and cannot be declared
> > per-defun.
>
> I'd rather see some profile runs first that show where in some typical
> (tm) native-compiled ELisp program how much time is spent.
100% agreement.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp (was: [External] : Re: Shrinking the C core)
2023-08-21 9:30 ` Gerd Möllmann
2023-08-21 11:13 ` Type declarations in Elisp Eli Zaretskii
@ 2023-08-21 11:37 ` Ihor Radchenko
2023-08-22 5:34 ` Type declarations in Elisp Gerd Möllmann
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-21 11:37 UTC (permalink / raw)
To: Gerd Möllmann; +Cc: drew.adams, emacs-devel, incal
Gerd Möllmann <gerd.moellmann@gmail.com> writes:
>> I am wondering if type, ftype, and inline declarations could be added to
>> Elisp. Native compilation already uses a fixed set of ftype
>> declarations, but it cannot be modified and cannot be declared
>> per-defun.
>
> I'd rather see some profile runs first that show where in some typical
> (tm) native-compiled ELisp program how much time is spent.
Let me try.
I did
$ perf record make test
$ perf report
| 15.56% | emacs | emacs | [.] | process_mark_stack |
| 9.30% | emacs | emacs | [.] | re_match_2_internal |
| 4.50% | emacs | emacs | [.] | exec_byte_code |
| 2.86% | emacs | emacs | [.] | readchar |
| 2.16% | emacs | emacs | [.] | pdumper_marked_p_impl |
| 2.13% | emacs | emacs | [.] | dump_do_dump_relocation |
| 1.95% | emacs | emacs | [.] | Fmemq |
| 1.88% | emacs | emacs | [.] | mark_char_table |
| 1.55% | emacs | emacs | [.] | read0 |
| 1.51% | emacs | emacs | [.] | oblookup |
| 1.37% | emacs | emacs | [.] | md5_process_block |
| 1.33% | emacs | emacs | [.] | re_search_2 |
| 1.17% | emacs | emacs | [.] | Ffuncall |
| 1.12% | emacs | emacs | [.] | pdumper_set_marked_impl |
| 1.05% | emacs | [unknown] | [k] | 0xffffffffaae01857 |
| 1.03% | emacs | ld-linux-x86-64.so.2 | [.] | 0x00000000000091b8 |
| 0.94% | emacs | emacs | [.] | allocate_vectorlike |
| 0.92% | emacs | emacs | [.] | unbind_to |
| 0.90% | emacs | emacs | [.] | funcall_subr |
| 0.90% | emacs | emacs | [.] | plist_get |
| 0.82% | emacs | emacs | [.] | readbyte_from_stdio |
| 0.61% | emacs | emacs | [.] | Fassq |
| 0.58% | emacs | emacs | [.] | compile_pattern |
| 0.57% | emacs | emacs | [.] | read_string_literal |
| 0.56% | emacs | emacs | [.] | internal_equal |
| 0.56% | emacs | emacs | [.] | funcall_general |
| 0.55% | emacs | ld-linux-x86-64.so.2 | [.] | 0x0000000000009176 |
| 0.53% | emacs | emacs | [.] | Fcons |
| 0.52% | emacs | emacs | [.] | hash_lookup |
| 0.51% | emacs | emacs | [.] | Fget |
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-21 11:37 ` Type declarations in Elisp (was: [External] : Re: Shrinking the C core) Ihor Radchenko
@ 2023-08-22 5:34 ` Gerd Möllmann
2023-08-22 6:16 ` Ihor Radchenko
2023-08-22 11:14 ` Eli Zaretskii
0 siblings, 2 replies; 247+ messages in thread
From: Gerd Möllmann @ 2023-08-22 5:34 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: drew.adams, emacs-devel, incal
Ihor Radchenko <yantar92@posteo.net> writes:
> Gerd Möllmann <gerd.moellmann@gmail.com> writes:
>> I'd rather see some profile runs first that show where in some typical
>> (tm) native-compiled ELisp program how much time is spent.
>
> Let me try.
> I did
>
> $ perf record make test
> $ perf report
Thanks.
I'm afraid I don't see much worth expanding on in the report. And the
question is, of course, how typical (tm) a result from make check is.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-22 5:34 ` Type declarations in Elisp Gerd Möllmann
@ 2023-08-22 6:16 ` Ihor Radchenko
2023-08-22 11:14 ` Eli Zaretskii
1 sibling, 0 replies; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-22 6:16 UTC (permalink / raw)
To: Gerd Möllmann; +Cc: drew.adams, emacs-devel, incal
Gerd Möllmann <gerd.moellmann@gmail.com> writes:
> I'm afraid I don't see much worth expanding on in the report. And the
> question is, of course, how typical (tm) a result from make check is.
I suspect that we cannot agree about this ever. It's like the best
defaults - different for each individual user.
What might be more practical is measuring performance of popular
libraries/packages.
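For example, `benchmark-run' from the built-in benchmark.el can time a
concrete library entry point - the Org snippet here is only an
illustration of the idea, not a proposed benchmark suite:

```elisp
(require 'benchmark)
;; Returns (ELAPSED-SECONDS GC-RUNS GC-SECONDS) for 100 repetitions
;; of fontifying a small Org buffer.
(benchmark-run 100
  (with-temp-buffer
    (org-mode)
    (insert "* heading\nsome text\n")
    (font-lock-ensure)))
```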
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-22 5:34 ` Type declarations in Elisp Gerd Möllmann
2023-08-22 6:16 ` Ihor Radchenko
@ 2023-08-22 11:14 ` Eli Zaretskii
2023-08-22 23:33 ` Emanuel Berg
1 sibling, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-22 11:14 UTC (permalink / raw)
To: Gerd Möllmann; +Cc: yantar92, emacs-devel
> From: Gerd Möllmann <gerd.moellmann@gmail.com>
> Cc: drew.adams@oracle.com, emacs-devel@gnu.org, incal@dataswamp.org
> Date: Tue, 22 Aug 2023 07:34:41 +0200
>
> Ihor Radchenko <yantar92@posteo.net> writes:
>
> > Gerd Möllmann <gerd.moellmann@gmail.com> writes:
> >> I'd rather see some profile runs first that show where in some typical
> >> (tm) native-compiled ELisp program how much time is spent.
> >
> > Let me try.
> > I did
> >
> > $ perf record make test
> > $ perf report
>
> Thanks.
>
> I'm afraid I don't see much worth expanding on in the report. And the
> question is, of course, how typical (tm) a result from make check is.
Yes. Moreover, the profile indicates that the only somewhat
significant consumer of CPU time was process_mark_stack, which is a
function called by GC. So this is completely uninteresting in the
context of looking at hot spots in Lisp code and Lisp interpreter.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-22 11:14 ` Eli Zaretskii
@ 2023-08-22 23:33 ` Emanuel Berg
2023-08-25 9:29 ` Andrea Corallo
2023-08-27 8:42 ` Ihor Radchenko
0 siblings, 2 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-22 23:33 UTC (permalink / raw)
To: emacs-devel
FWIW here are the results before I lost interest in the idea,
for now at least.
I was unable to get `elisp-benchmarks-run' to run only specific
benchmarks, but the docstring claims that it is possible, so maybe the
error was mine.
Also, it would be beneficial if it could be told to just return the
values; now it can only make a table out of it - granted, the table is
neat and all.
CC to Andrea Corallo who wrote it, thanks for the package and
good luck working with it in the future, it absolutely has its
place in the Elisp world.
All files:
https://dataswamp.org/~incal/cl/bench/
| test | non-gc avg (s) | gc avg (s) | gcs avg | tot avg (s) | tot avg err (s) |
|----------------+----------------+------------+---------+-------------+-----------------|
| bubble | 0.68 | 0.18 | 1 | 0.86 | 0.00 |
| bubble-no-cons | 1.06 | 0.00 | 0 | 1.06 | 0.00 |
| fibn | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-rec | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-tc | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
|----------------+----------------+------------+---------+-------------+-----------------|
| total | 37.55 | 12.32 | 90 | 49.88 | 0.23 |
bubble
0.659992 s real time
1.882428 s run time
bubble-no-cons
0.807989 s real time
2.421754 s run time
fibn
0.071999 s real time
0.212986 s run time
fibn-rec
0.547993 s real time
1.644687 s run time
fibn-tc
0.407995 s real time
1.22462 s run time
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-22 23:33 ` Emanuel Berg
@ 2023-08-25 9:29 ` Andrea Corallo
2023-08-25 20:42 ` Emanuel Berg
2023-08-27 8:42 ` Ihor Radchenko
1 sibling, 1 reply; 247+ messages in thread
From: Andrea Corallo @ 2023-08-25 9:29 UTC (permalink / raw)
To: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> FWIW here are the results before I lost interest in the idea,
> for now at least.
>
> I was unable to get `elisp-benchmarks-run' to just run
> specific benchmarks, but reading the docstring it claims that
> it is possible, so maybe the error was on me.
Hi!
From the docstring of `elisp-benchmarks-run'.
If non nil SELECTOR is a regexp to match the benchmark names to be executed.
The test is repeated RUNS number of times.
emacs -batch -l ./elisp-benchmarks.el -eval '(elisp-benchmarks-run "bubble")'
This for instance on my system runs only bubble and bubble-no-cons.
> Also it would be beneficial if it could just be told to just
> return the values, now it can only make a table out if it -
> granted, it is neat and all.
The values in which format? PS the table should be org parsable.
Bests
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-25 9:29 ` Andrea Corallo
@ 2023-08-25 20:42 ` Emanuel Berg
0 siblings, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-25 20:42 UTC (permalink / raw)
To: emacs-devel
Andrea Corallo wrote:
>> FWIW here are the results before I lost interest in the
>> idea, for now at least.
>>
>> I was unable to get `elisp-benchmarks-run' to just run
>> specific benchmarks, but reading the docstring it claims
>> that it is possible, so maybe the error was on me.
>
> From the docstring of `elisp-benchmarks-run'.
>
> If non nil SELECTOR is a regexp to match the benchmark
> names to be executed. The test is repeated RUNS number
> of times.
>
> emacs -batch -l ./elisp-benchmarks.el -eval
> '(elisp-benchmarks-run "bubble")'
>
> This for instance on my system runs only bubble and
> bubble-no-cons.
Indeed, that works here as well. What I did was just evaluate, in this
case,
(elisp-benchmarks-run "bubble")
i.e., without running it in batch mode. That produces the entire
table - I'm not sure how that even happens - but the _package_
documentation does say to run it in batch mode, and that makes more
sense, too.
Still, I wonder why that happens? :O
>> Also it would be beneficial if it could just be told to
>> just return the values, now it can only make a table out if
>> it - granted, it is neat and all.
>
> The values in which format?
They would be in the same format, what I had in mind was so
you would get for example the above form to evaluate into
(0.80 0.09 1 0.89 0.01) instead of
| test | non-gc avg (s) | gc avg (s) | gcs avg | tot avg (s) | tot avg err (s) |
|----------------+----------------+------------+---------+-------------+-----------------|
| bubble | 0.80 | 0.09 | 1 | 0.89 | 0.01 |
| bubble-no-cons | 1.06 | 0.00 | 0 | 1.06 | 0.00 |
|----------------+----------------+------------+---------+-------------+-----------------|
| total | 1.86 | 0.09 | 1 | 1.95 | 0.01 |
And, while we are at it, maybe the documentation could also
explain these columns. It only has to be one sentence for
each, i.e. non-gc avg (s), gc avg (s), gcs avg, tot avg (s),
and tot avg err (s).
Sorry about giving you more work but these should be quick
fixes :)
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-22 23:33 ` Emanuel Berg
2023-08-25 9:29 ` Andrea Corallo
@ 2023-08-27 8:42 ` Ihor Radchenko
2023-08-27 14:04 ` Andrea Corallo
1 sibling, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-27 8:42 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
Emanuel Berg <incal@dataswamp.org> writes:
> FWIW here are the results before I lost interest in the idea,
> for now at least.
Do I read correctly that most of the benchmarks run faster (if we
disregard GC) in Elisp compared to SBCL?
Elisp bubble: 0.68 sec vs SBCL 0.66 sec
Elisp bubble-no-cons: 1.06 sec vs SBCL 0.81 sec
Elisp fibn: 0.00 sec vs. SBCL 0.07 sec
Elisp fibn-rec: 0.00 sec vs. SBCL 0.55 sec
Elisp fibn-tc: 0.00 sec vs. SBCL 0.41 sec
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-27 8:42 ` Ihor Radchenko
@ 2023-08-27 14:04 ` Andrea Corallo
2023-08-27 14:07 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Andrea Corallo @ 2023-08-27 14:04 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Emanuel Berg, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> Emanuel Berg <incal@dataswamp.org> writes:
>
>> FWIW here are the results before I lost interest in the idea,
>> for now at least.
>
> Do I read correctly that most of the benchmarks run faster (if we
> disregard GC) in Elisp compared to SBCL?
>
> Elisp bubble: 0.68 sec vs SBCL 0.66 sec
> Elisp bubble-no-cons: 1.06 sec vs SBCL 0.81 sec
> Elisp fibn: 0.00 sec vs. SBCL 0.07 sec
> Elisp fibn-rec: 0.00 sec vs. SBCL 0.55 sec
> Elisp fibn-tc: 0.00 sec vs. SBCL 0.41 sec
IIRC the issue is that the native compiler at speed 3 completely
optimizes out the three Fibonacci benchmarks; when I wrote/added those
u-benchmarks it was not the case, but afterward the compiler got smarter.
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-27 14:04 ` Andrea Corallo
@ 2023-08-27 14:07 ` Ihor Radchenko
2023-08-27 15:46 ` Andrea Corallo
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-27 14:07 UTC (permalink / raw)
To: Andrea Corallo; +Cc: Emanuel Berg, emacs-devel
Andrea Corallo <acorallo@gnu.org> writes:
> Ihor Radchenko <yantar92@posteo.net> writes:
>> Elisp fibn: 0.00 sec vs. SBCL 0.07 sec
>> Elisp fibn-rec: 0.00 sec vs. SBCL 0.55 sec
>> Elisp fibn-tc: 0.00 sec vs. SBCL 0.41 sec
>
> IIRC the issue is that the native compiler at speed 3 completely
> optimizes out the three Fibonacci benchmarks; when I wrote/added those
> u-benchmarks it was not the case, but afterward the compiler got smarter.
I got the same 0.0 numbers using -batch call with default settings,
which should be speed 2.
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-27 14:07 ` Ihor Radchenko
@ 2023-08-27 15:46 ` Andrea Corallo
2023-08-27 17:15 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Andrea Corallo @ 2023-08-27 15:46 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Emanuel Berg, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> Andrea Corallo <acorallo@gnu.org> writes:
>
>> Ihor Radchenko <yantar92@posteo.net> writes:
>>> Elisp fibn: 0.00 sec vs. SBCL 0.07 sec
>>> Elisp fibn-rec: 0.00 sec vs. SBCL 0.55 sec
>>> Elisp fibn-tc: 0.00 sec vs. SBCL 0.41 sec
>>
>> IIRC the issue is that the native compiler at speed 3 completely
>> optimizes out the three Fibonacci benchmarks; when I wrote/added those
>> u-benchmarks it was not the case, but afterward the compiler got smarter.
>
> I got the same 0.0 numbers using -batch call with default settings,
> which should be speed 2.
Hi Ihor,
I'm not sure about what you mean by "using -batch call with default
settings"; a detailed reproducer would probably make commenting easier.
Thanks
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-27 15:46 ` Andrea Corallo
@ 2023-08-27 17:15 ` Ihor Radchenko
2023-08-27 18:06 ` Andrea Corallo
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-27 17:15 UTC (permalink / raw)
To: Andrea Corallo; +Cc: Emanuel Berg, emacs-devel
Andrea Corallo <acorallo@gnu.org> writes:
>> I got the same 0.0 numbers using -batch call with default settings,
>> which should be speed 2.
>
> I'm not sure about what you mean by "using -batch call with default
> settings"; a detailed reproducer would probably make commenting easier.
emacs -batch -l .../elisp-benchmarks.el -f elisp-benchmarks-run
| fibn | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-rec | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-tc | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
full benchmark report:
| test | non-gc avg (s) | gc avg (s) | gcs avg | tot avg (s) | tot avg err (s) |
|--------------------+----------------+------------+---------+-------------+-----------------|
| bubble | 0.68 | 0.06 | 1 | 0.73 | 0.05 |
| bubble-no-cons | 1.17 | 0.00 | 0 | 1.17 | 0.07 |
| bytecomp | 1.64 | 0.32 | 13 | 1.95 | 0.03 |
| dhrystone | 2.13 | 0.00 | 0 | 2.13 | 0.02 |
| eieio | 1.19 | 0.13 | 7 | 1.32 | 0.04 |
| fibn | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-named-let | 1.47 | 0.00 | 0 | 1.47 | 0.04 |
| fibn-rec | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| fibn-tc | 0.00 | 0.00 | 0 | 0.00 | 0.00 |
| flet | 1.41 | 0.00 | 0 | 1.41 | 0.03 |
| inclist | 0.84 | 0.00 | 0 | 0.84 | 0.03 |
| inclist-type-hints | 0.76 | 0.00 | 0 | 0.76 | 0.00 |
| listlen-tc | 0.12 | 0.00 | 0 | 0.12 | 0.01 |
| map-closure | 5.25 | 0.00 | 0 | 5.25 | 0.02 |
| nbody | 1.47 | 0.15 | 1 | 1.62 | 0.07 |
| pack-unpack | 0.38 | 0.02 | 1 | 0.40 | 0.00 |
| pack-unpack-old | 1.13 | 0.05 | 3 | 1.19 | 0.03 |
| pcase | 1.77 | 0.00 | 0 | 1.77 | 0.01 |
| pidigits | 5.04 | 0.97 | 17 | 6.00 | 0.06 |
| scroll | 0.58 | 0.00 | 0 | 0.58 | 0.02 |
| smie | 1.47 | 0.05 | 2 | 1.52 | 0.02 |
|--------------------+----------------+------------+---------+-------------+-----------------|
| total | 28.49 | 1.74 | 45 | 30.23 | 0.16 |
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-27 17:15 ` Ihor Radchenko
@ 2023-08-27 18:06 ` Andrea Corallo
2023-08-28 9:56 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Andrea Corallo @ 2023-08-27 18:06 UTC (permalink / raw)
To: Ihor Radchenko; +Cc: Emanuel Berg, emacs-devel
Ihor Radchenko <yantar92@posteo.net> writes:
> Andrea Corallo <acorallo@gnu.org> writes:
>
>>> I got the same 0.0 numbers using -batch call with default settings,
>>> which should be speed 2.
>>
>> I'm not sure about what you mean by "using -batch call with default
>> settings"; a detailed reproducer would probably make commenting easier.
>
> emacs -batch -l .../elisp-benchmarks.el -f elisp-benchmarks-run
Mmmh but AFAIR elisp-benchmarks always native compiles at speed 3.
Bests
Andrea
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Type declarations in Elisp
2023-08-27 18:06 ` Andrea Corallo
@ 2023-08-28 9:56 ` Ihor Radchenko
2023-08-28 19:06 ` Emanuel Berg
0 siblings, 1 reply; 247+ messages in thread
From: Ihor Radchenko @ 2023-08-28 9:56 UTC (permalink / raw)
To: Andrea Corallo; +Cc: Emanuel Berg, emacs-devel
Andrea Corallo <acorallo@gnu.org> writes:
>> emacs -batch -l .../elisp-benchmarks.el -f elisp-benchmarks-run
>
> Mmmh but AFAIR elisp-benchmarks always native compiles at speed 3.
It is... a questionable default.
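Concretely, the level can be overridden before the benchmarks are
compiled - a sketch, assuming elisp-benchmarks honors the global
setting (as Andrea notes, it may force speed 3 per file):

```elisp
;; `native-comp-speed' ranges from -1 (no native compilation) to 3;
;; the Emacs default is 2, while elisp-benchmarks compiles at 3.
(setq native-comp-speed 2)
(load "elisp-benchmarks.el")
(elisp-benchmarks-run "fibn")
```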
With speed 2, I get
| fibn | 1.03 | 0.00 | 0 | 1.03 | 0.07 |
| fibn-named-let | 1.71 | 0.00 | 0 | 1.71 | 0.08 |
| fibn-rec | 3.71 | 0.00 | 0 | 3.71 | 0.13 |
| fibn-tc | 4.44 | 0.00 | 0 | 4.44 | 0.04 |
Elisp bubble: 0.66 sec vs SBCL 0.66 sec
Elisp bubble-no-cons: 1.11 sec vs SBCL 0.81 sec
Elisp fibn: 1.03 sec vs. SBCL 0.07 sec
Elisp fibn-rec: 3.71 sec vs. SBCL 0.55 sec
Elisp fibn-tc: 4.44 sec vs. SBCL 0.41 sec
perf on all the fibn tests shows that recursion is still going through
`funcall' - most of the time is spent in Ffuncall, funcall_general, and
funcall_subr. Runtime typechecking (check_number_coerce_marker) is also
taking quite a bit of time.
arithcompare is strange. AFAIU, comparisons in the benchmarks are
(= n 0)/(= n 1). Why are they so costly?
20.25% emacs emacs [.] Ffuncall
12.70% emacs emacs [.] arith_driver
9.59% emacs emacs [.] funcall_general
9.46% emacs emacs [.] funcall_subr
8.70% emacs emacs [.] check_number_coerce_marker
7.61% emacs emacs [.] arithcompare
6.63% emacs emacs [.] arithcompare_driver
6.17% emacs fibn-07298b84-44e7557d.eln [.] F656c622d6669626e2d7463_elb_fibn_tc_0
5.95% emacs emacs [.] Fplus
4.09% emacs fibn-07298b84-44e7557d.eln [.] F656c622d6669626e2d726563_elb_fibn_rec_0
1.98% emacs fibn-07298b84-44e7557d.eln [.] F656c622d6669626e_elb_fibn_0
1.85% emacs emacs [.] Feqlsign
1.60% emacs fibn-07298b84-44e7557d.eln [.] F656c622d6669626e2d6e616d65642d6c6574_elb_fibn_named_let_0
0.98% emacs emacs [.] Flss
0.64% emacs emacs [.] Fminus
--
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-15 22:57 ` Emanuel Berg
2023-08-16 10:27 ` Ihor Radchenko
@ 2023-08-18 8:35 ` Aurélien Aptel
2023-08-19 13:32 ` Emanuel Berg
2023-08-31 1:41 ` Emanuel Berg
1 sibling, 2 replies; 247+ messages in thread
From: Aurélien Aptel @ 2023-08-18 8:35 UTC (permalink / raw)
To: emacs-devel
On Wed, Aug 16, 2023 at 4:22 AM Emanuel Berg <incal@dataswamp.org> wrote:
> Actually, even I have a better random than Elisp:
>
> https://dataswamp.org/~incal/emacs-init/random-urandom/
Your pw_random_number() function is leaking the file descriptor. You
need to close fd each call.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 23:09 ` Emanuel Berg
2023-08-13 5:50 ` tomas
2023-08-13 8:00 ` Andreas Schwab
@ 2023-08-14 2:36 ` Richard Stallman
2023-08-14 4:12 ` Emanuel Berg
2 siblings, 1 reply; 247+ messages in thread
From: Richard Stallman @ 2023-08-14 2:36 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
> Okay, but here it isn't about joining the CL standard, it is
> the situation that we have "the Lisp editor" yet our Lisp is
> much slower than other people's Lisp, and for no good reason
> that I can understand, as Emacs is C and SBCL is C. What's the
> difference; why is one so much faster than the other?
It's useful to investigate why this is so -- if someone finds
a fairly easy way to make Emacs faster, that could be good.
However, major redesign would be more trouble than it is worth.
Instead, please consider all the jobs we have NO good free software to
do. Improving that free software from zero to version 0.1 would be a
far more important contribution to the Free World.
--
Dr Richard Stallman (https://stallman.org)
Chief GNUisance of the GNU Project (https://gnu.org)
Founder, Free Software Foundation (https://fsf.org)
Internet Hall-of-Famer (https://internethalloffame.org)
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-14 2:36 ` Richard Stallman
@ 2023-08-14 4:12 ` Emanuel Berg
2023-08-14 11:15 ` Ihor Radchenko
0 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-14 4:12 UTC (permalink / raw)
To: emacs-devel
Richard Stallman wrote:
> Instead, please consider all the jobs we have NO good free
> software to do. Improving that free software from zero to
> version 0.1 would be a far more important contribution to
> the Free World.
Not necessarily since everyone uses speed - except for
amphetamine addicts, maybe - but very few do their jobs with
a piece of version 0.1 software.
But those trajectories don't contradict each other - on the
contrary, actually, as even the 0.1 software uses speed, and
increased speed makes new such projects feasible as well.
Still, I get your point and would instead like to ask: what
jobs are they, exactly?
If you keep a drawer in the FSF building full of unprogrammed
"version 0.0 software", I'd be happy to take a look!
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 2:46 ` Richard Stallman
2023-08-12 3:22 ` Emanuel Berg
@ 2023-08-12 3:28 ` Christopher Dimech
2023-08-12 3:48 ` Emanuel Berg
` (2 more replies)
1 sibling, 3 replies; 247+ messages in thread
From: Christopher Dimech @ 2023-08-12 3:28 UTC (permalink / raw)
To: rms; +Cc: Dmitry Gutov, esr, luangruo, emacs-devel
> Sent: Saturday, August 12, 2023 at 2:46 PM
> From: "Richard Stallman" <rms@gnu.org>
> To: "Dmitry Gutov" <dmitry@gutov.dev>
> Cc: esr@thyrsus.com, luangruo@yahoo.com, emacs-devel@gnu.org
> Subject: Re: Shrinking the C core
>
> [[[ To any NSA and FBI agents reading my email: please consider ]]]
> [[[ whether defending the US Constitution against all enemies, ]]]
> [[[ foreign or domestic, requires you to follow Snowden's example. ]]]
>
> There are occasions when it is useful, for added flexibility,
> to move some function from C to Lisp. However, stability
> is an important goal for Emacs, so we should not even try
> to move large amounts of code to Lisp just for the sake
> of moving code to Lisp.
>
> The rate at which we have added features already causes significant
> instability.
I concur. I have been suggesting a basic version of Emacs that would be
considered complete and stable, while minimizing the risk of instability
due to feature additions.
Every project should have such an aim and achieve it. Clearly, the core
requirements must be well defined, and the scope limited to the essential
features needed to fulfill its primary purpose. Maintaining an Emacs that
does everything could halt its continued development, especially if it
cannot reasonably be handled by a few people.
> --
> Dr Richard Stallman (https://stallman.org)
> Chief GNUisance of the GNU Project (https://gnu.org)
> Founder, Free Software Foundation (https://fsf.org)
> Internet Hall-of-Famer (https://internethalloffame.org)
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 3:28 ` Christopher Dimech
@ 2023-08-12 3:48 ` Emanuel Berg
2023-08-12 3:50 ` Emanuel Berg
2023-08-12 6:02 ` Eli Zaretskii
2 siblings, 0 replies; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 3:48 UTC (permalink / raw)
To: emacs-devel
Christopher Dimech wrote:
> I concur. Have been suggesting to have a basic version of
> emacs that would be considered complete and stable, while
> minimizing the risk of instability due to feature additions.
We have a basic version, and then packages with more ...
> Every project should have such aim and achieve it. Clearly,
> the core requirements must be well defined, and the scope
> limited in terms of the essential features to fulfill its
> primary purpose. Maintaining emacs for it to do everything
> could halt its continued development structure.
Again, it is modular ...
The old, half-abandoned Cathedral model maybe works _sometimes_ for
small and/or very well-defined projects, but Emacs is a world already;
it is too late for that to ever work, if it was even desired from the
get-go, which is debatable ...
P2P review is a nice way of putting it; another way is that everyone
does as much as possible about everything. And no, the overhead of
maintenance complexity is a minor detriment compared to the immense
gains all that software brings.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 3:28 ` Christopher Dimech
2023-08-12 3:48 ` Emanuel Berg
@ 2023-08-12 3:50 ` Emanuel Berg
2023-08-12 6:00 ` Christopher Dimech
2023-08-12 6:02 ` Eli Zaretskii
2 siblings, 1 reply; 247+ messages in thread
From: Emanuel Berg @ 2023-08-12 3:50 UTC (permalink / raw)
To: emacs-devel
Christopher Dimech wrote:
> I concur. Have been suggesting to have a basic version of
> emacs that would be considered complete and stable, while
> minimizing the risk of instability due to feature additions.
No one wants a perfect piece of minimal software developed by
6 experts around the planet; it gets old too fast.
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 3:50 ` Emanuel Berg
@ 2023-08-12 6:00 ` Christopher Dimech
0 siblings, 0 replies; 247+ messages in thread
From: Christopher Dimech @ 2023-08-12 6:00 UTC (permalink / raw)
To: Emanuel Berg; +Cc: emacs-devel
> Sent: Saturday, August 12, 2023 at 3:50 PM
> From: "Emanuel Berg" <incal@dataswamp.org>
> To: emacs-devel@gnu.org
> Subject: Re: Shrinking the C core
>
> Christopher Dimech wrote:
>
> > I concur. Have been suggesting to have a basic version of
> > emacs that would be considered complete and stable, while
> > minimizing the risk of instability due to feature additions.
>
> No one wants a perfect piece of minimal software developed by
> 6 experts around the planet, gets old too fast.
For specific tasks, one does not use Emacs in its entirety. Although
you're raising a valid point, couldn't the rest of Emacs be added on top
of that? It would make it easier for some other group of 6 people to
understand it and do something with it, without needing years of
experience managing all of it. There would be some value in that.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 3:28 ` Christopher Dimech
2023-08-12 3:48 ` Emanuel Berg
2023-08-12 3:50 ` Emanuel Berg
@ 2023-08-12 6:02 ` Eli Zaretskii
2023-08-12 7:38 ` Christopher Dimech
2 siblings, 1 reply; 247+ messages in thread
From: Eli Zaretskii @ 2023-08-12 6:02 UTC (permalink / raw)
To: Christopher Dimech; +Cc: rms, dmitry, esr, luangruo, emacs-devel
> From: Christopher Dimech <dimech@gmx.com>
> Cc: Dmitry Gutov <dmitry@gutov.dev>, esr@thyrsus.com, luangruo@yahoo.com,
> emacs-devel@gnu.org
> Date: Sat, 12 Aug 2023 05:28:18 +0200
>
> > The rate at which we have added features already causes significant
> > instability.
>
> I concur. I have been suggesting a basic version of Emacs that would be
> considered complete and stable, while minimizing the risk of instability
> due to feature additions.
>
> Every project should have such an aim and achieve it. Clearly, the core
> requirements must be well defined, and the scope limited to the essential
> features needed to fulfill its primary purpose. Maintaining an Emacs that
> does everything could halt its continued development, especially if it
> cannot reasonably be handled by a few people.
This is exactly what I try to make happen. But I don't claim to have
it all figured out in the best manner, so if someone knows how to do
that better without driving away contributors OT1H and without
complicating development even more OTOH, I invite you to step up and
become a (co-)maintainer, and then practice what you preach.
^ permalink raw reply [flat|nested] 247+ messages in thread
* Re: Shrinking the C core
2023-08-12 6:02 ` Eli Zaretskii
@ 2023-08-12 7:38 ` Christopher Dimech
0 siblings, 0 replies; 247+ messages in thread
From: Christopher Dimech @ 2023-08-12 7:38 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: rms, dmitry, esr, luangruo, emacs-devel
> Sent: Saturday, August 12, 2023 at 6:02 PM
> From: "Eli Zaretskii" <eliz@gnu.org>
> To: "Christopher Dimech" <dimech@gmx.com>
> Cc: rms@gnu.org, dmitry@gutov.dev, esr@thyrsus.com, luangruo@yahoo.com, emacs-devel@gnu.org
> Subject: Re: Shrinking the C core
>
> > From: Christopher Dimech <dimech@gmx.com>
> > Cc: Dmitry Gutov <dmitry@gutov.dev>, esr@thyrsus.com, luangruo@yahoo.com,
> > emacs-devel@gnu.org
> > Date: Sat, 12 Aug 2023 05:28:18 +0200
> >
> > > The rate at which we have added features already causes significant
> > > instability.
> >
> > I concur. I have been suggesting a basic version of Emacs that would be considered
> > complete and stable, while minimizing the risk of instability due to feature additions.
> >
> > Every project should have such an aim and achieve it. Clearly, the core requirements must
> > be well defined, and the scope limited to the essential features that fulfill its
> > primary purpose. Maintaining Emacs so that it does everything could halt its continued
> > development, especially if it cannot be reasonably handled by a few people.
>
> This is exactly what I try to make happen. But I don't claim to have
> it all figured out in the best manner
Glad to hear it. The comment was not a rebuke, although others do have
a different idea: that Emacs should do everything.
> so if someone knows how to do
> that better without driving away contributors OT1H and without
> complicating development even more OTOH, I invite you to step up and
> become a (co-)maintainer, and then practice what you preach.
There are already three co-maintainers; how many more are required, exactly?
And there are a good number of others whom one could consider experts.
My focus is on work in a distinct area that is not being pursued by
others, which makes your suggestion difficult for me to take up. You decide
what you want to use. What I can do is see whether I can get others
interested who could help you, if you want.
Felicitations
Kristinu
^ permalink raw reply [flat|nested] 247+ messages in thread