* bug#19565: Emacs vulnerable to endless-data attack (minor)
@ 2015-01-11 11:12 Kelly Dean
  2015-01-11 18:33 ` Richard Stallman
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Kelly Dean @ 2015-01-11 11:12 UTC (permalink / raw)
  To: 19565

A few days ago I speculated about this; now I've confirmed it. It's technically considered a vulnerability, but in Emacs's case it's a minor problem; exploiting it would be more a prank than a real attack.

To demo locally for archive metadata:
echo -en 'HTTP/1.1 200 OK\r\n\r\n' > header
cat header /dev/urandom | nc -l -p 80

Then in Emacs:
(setq package-archives '(("foo" . "http://127.0.0.1/")))
M-x list-packages

Watch Emacs's memory usage grow and grow...

If you set some arbitrary limit on the size of archive-contents, then theoretically you break some legitimate ginormous elpa. And if you're getting garbage, you wouldn't know it until you've downloaded more garbage than the limit. The right way to fix it is to include the size of archive-contents in another file that can legitimately be constrained to a specified small maximum size, sign that file, and in the client, abort the archive-contents download if you get more data than you're supposed to.
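
A minimal sketch of the client-side abort, assuming the expected size
was already read from the signed file (the names below are made up for
illustration, and it needs lexical-binding):

(defun pkg--size-capped-filter (max-bytes)
  "Return a process filter that aborts the download after MAX-BYTES."
  (let ((received 0))
    (lambda (proc chunk)
      (setq received (+ received (string-bytes chunk)))
      (if (> received max-bytes)
          (progn
            (delete-process proc)
            (message "archive-contents exceeded %d bytes; aborting"
                     max-bytes))
        (with-current-buffer (process-buffer proc)
          (goto-char (point-max))
          (insert chunk))))))

Install it with (set-process-filter proc (pkg--size-capped-filter
expected-size)) before any data arrives.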

The timestamp file that I proposed for fixing the metadata replay vuln (bug #19479) would be a suitable place to record the size; then no additional file (and signature) is needed just to solve endless-metadata. For the corresponding endless-data vuln for packages instead of metadata, I already put sizes in the package records in my patch for the package replay vuln.

Don't forget you need to set a maximum size not only on the timestamp file, but also on the signature file, or they would be vulnerable too. E.g. just hardcode 1kB.






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2015-01-11 11:12 bug#19565: Emacs vulnerable to endless-data attack (minor) Kelly Dean
@ 2015-01-11 18:33 ` Richard Stallman
  2015-01-11 21:18 ` Kelly Dean
  2019-10-06  3:13 ` Stefan Kangas
  2 siblings, 0 replies; 14+ messages in thread
From: Richard Stallman @ 2015-01-11 18:33 UTC (permalink / raw)
  To: Kelly Dean; +Cc: 19565

[[[ To any NSA and FBI agents reading my email: please consider    ]]]
[[[ whether defending the US Constitution against all enemies,     ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]

  > A few days ago I speculated about this; now I've confirmed it. It's technically considered a vulnerability, but in Emacs's case it's a minor problem; exploiting it would be more a prank than a real attack.

That is a relief.

-- 
Dr Richard Stallman
President, Free Software Foundation
51 Franklin St
Boston MA 02110
USA
www.fsf.org  www.gnu.org
Skype: No way! That's nonfree (freedom-denying) software.
  Use Ekiga or an ordinary phone call.







* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2015-01-11 11:12 bug#19565: Emacs vulnerable to endless-data attack (minor) Kelly Dean
  2015-01-11 18:33 ` Richard Stallman
@ 2015-01-11 21:18 ` Kelly Dean
  2019-10-06  3:13 ` Stefan Kangas
  2 siblings, 0 replies; 14+ messages in thread
From: Kelly Dean @ 2015-01-11 21:18 UTC (permalink / raw)
  To: 19565

If Emacs gets an auto-updater, or even an auto-checker for updates, like some common operating systems and web browsers have, then this bug would become an actual problem, enabling denial-of-service attacks. Since Emacs is an OS, and now a web browser too, it might get an auto-updater or auto-checker.

Even so, it would only enable DoS attacks, nothing more.






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2015-01-11 11:12 bug#19565: Emacs vulnerable to endless-data attack (minor) Kelly Dean
  2015-01-11 18:33 ` Richard Stallman
  2015-01-11 21:18 ` Kelly Dean
@ 2019-10-06  3:13 ` Stefan Kangas
  2019-10-06 17:32   ` Eli Zaretskii
  2 siblings, 1 reply; 14+ messages in thread
From: Stefan Kangas @ 2019-10-06  3:13 UTC (permalink / raw)
  To: Lars Ingebrigtsen; +Cc: 19565

Kelly Dean <kelly@prtime.org> writes:

> A few days ago I speculated about this; now I've confirmed it. It's technically considered a vulnerability, but in Emacs's case it's a minor problem; exploiting it would be more a prank than a real attack.
>
> To demo locally for archive metadata:
> echo -en 'HTTP/1.1 200 OK\r\n\r\n' > header
> cat header /dev/urandom | nc -l -p 80
>
> Then in Emacs:
> (setq package-archives '(("foo" . "http://127.0.0.1/")))
> M-x list-packages
>
> Watch Emacs's memory usage grow and grow...
>
> If you set some arbitrary limit on the size of archive-contents, then
> theoretically you break some legitimate ginormous elpa. And if you're getting
> garbage, you wouldn't know it until you've downloaded more garbage than the
> limit. The right way to fix it is to include the size of archive-contents in
> another file that can legitimately be constrained to a specified small maximum
> size, sign that file, and in the client, abort the archive-contents download if
> you get more data than you're supposed to.
>
> The timestamp file that I proposed for fixing the metadata replay vuln (bug
> #19479) would be a suitable place to record the size; then no additional file
> (and signature) is needed just to solve endless-metadata. For the corresponding
> endless-data vuln for packages instead of metadata, I already put sizes in the
> package records in my patch for the package replay vuln.
>
> Don't forget you need to set a maximum size not only on the timestamp file, but also on the signature file, or they would be vulnerable too. E.g. just hardcode 1kB.

I think this affects more than just package.el.  AFAICT, anywhere we
use the url library, an endless data attack can get Emacs to fill up
all available memory (also wasting bandwidth, of course).

Lars, perhaps we could add code to handle this in with-fetched-url?

For example, a new keyword argument :max-size, which would make it
stop after having reached that many bytes.  IMO, it would be even
better if this were set to some arbitrarily chosen high value by
default, like 256 MiB or something, so that this protection is on
unless explicitly turned off with nil.
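
Very roughly, something like this (with-fetched-url isn't finalized,
so all names and numbers below are placeholders, not a real API):

(defvar url-max-fetch-size (* 256 1024 1024)
  "Byte limit on URL fetches; nil disables the check.")

(defun url--enforce-max-size (proc max-size)
  "Kill PROC when its buffer grows beyond MAX-SIZE bytes.
Network process buffers are unibyte, so buffer size is a byte count."
  (when (and max-size
             (> (buffer-size (process-buffer proc)) max-size))
    (delete-process proc)
    (message "Fetch %s exceeded %d bytes; aborted"
             (process-name proc) max-size)))

A :max-size keyword would then simply override url-max-fetch-size for
that one fetch.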

Best regards,
Stefan Kangas






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-06  3:13 ` Stefan Kangas
@ 2019-10-06 17:32   ` Eli Zaretskii
  2019-10-07  1:51     ` Lars Ingebrigtsen
  0 siblings, 1 reply; 14+ messages in thread
From: Eli Zaretskii @ 2019-10-06 17:32 UTC (permalink / raw)
  To: Stefan Kangas; +Cc: larsi, 19565

> From: Stefan Kangas <stefan@marxist.se>
> Date: Sun, 6 Oct 2019 05:13:27 +0200
> Cc: 19565@debbugs.gnu.org
> 
> I think this affects more than just package.el.  AFAICT, anywhere we
> use the url library, an endless data attack can get Emacs to fill up
> all available memory (also wasting bandwidth, of course).

At which point the system will kill the Emacs process.  Why is that a
problem we need to work on, given that we already have at least some
protection against stack overflows and running out of memory?

> For example, a new keyword argument :max-size, which would make it
> stop after having reached that many bytes.

The GNU Coding Standards frown on having arbitrary limits in a
program.  So this could only work if we had some reasonable way of
computing a limit that is not arbitrary.






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-06 17:32   ` Eli Zaretskii
@ 2019-10-07  1:51     ` Lars Ingebrigtsen
  2019-10-07 12:50       ` Stefan Kangas
  2019-10-07 16:13       ` Eli Zaretskii
  0 siblings, 2 replies; 14+ messages in thread
From: Lars Ingebrigtsen @ 2019-10-07  1:51 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Stefan Kangas, 19565

Eli Zaretskii <eliz@gnu.org> writes:

>> I think this affects more than just package.el.  AFAICT, anywhere we
>> use the url library, an endless data attack can get Emacs to fill up
>> all available memory (also wasting bandwidth, of course).
>
> At which point the system will kill the Emacs process.  Why is that a
> problem we need to work on, given that we already have at least some
> protection against stack overflows and running out of memory?

It's not something we have to do, but it would be nice to have some
protection against this.

>> For example, a new keyword argument :max-size, which would make it
>> stop after having reached that many bytes.
>
> The GNU Coding Standards frown on having arbitrary limits in a
> program.  So this could only work if we had some reasonable way of
> computing a limit that is not arbitrary.

I think it would perhaps make some sense to warn (or query) the user if
you get more data than `large-file-warning-threshold'.  I think it would
be pretty trivial to implement -- at least in the new with-fetched-url
interface, which I think is where this pretty theoretical problem is
least theoretical, perhaps?

On the other hand, I could see that in some ways it would be easier to
implement in wait_reading_process_output: We could just maintain a byte
counter in the process objects (if we don't do that already) and have a
callback we call if that counter grows larger than
`large-file-warning-threshold'.

That way Emacs wouldn't be open to flooding from, say, rogue SMTP
servers, either.
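
In Lisp terms the idea is roughly this (the real version would live in
process.c; hypothetical names, needs lexical-binding):

(defun flood-guard-filter (orig-filter)
  "Wrap ORIG-FILTER to query the user past `large-file-warning-threshold'."
  (let ((count 0) (asked nil))
    (lambda (proc chunk)
      (setq count (+ count (string-bytes chunk)))
      (when (and (not asked)
                 large-file-warning-threshold
                 (> count large-file-warning-threshold))
        (setq asked t)
        (unless (y-or-n-p (format "%s has sent %d bytes; keep reading? "
                                  (process-name proc) count))
          (delete-process proc)))
      (when (process-live-p proc)
        (funcall orig-filter proc chunk)))))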

-- 
(domestic pets only, the antidote for overdose, milk.)
   bloggy blog: http://lars.ingebrigtsen.no






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-07  1:51     ` Lars Ingebrigtsen
@ 2019-10-07 12:50       ` Stefan Kangas
  2019-10-07 16:13       ` Eli Zaretskii
  1 sibling, 0 replies; 14+ messages in thread
From: Stefan Kangas @ 2019-10-07 12:50 UTC (permalink / raw)
  To: Lars Ingebrigtsen; +Cc: 19565

Lars Ingebrigtsen <larsi@gnus.org> writes:

> It's not something we have to do, but it would be nice to have some
> protection against this.

This is my view, too.  And don't we usually treat a potential crash as
a bug to be fixed?

> I think it would perhaps make some sense to warn (or query) the user if
> you get more data than `large-file-warning-threshold'.  I think it would
> be pretty trivial to implement -- at least in the new with-fetched-url
> interface, which I think is where this pretty theoretical problem is
> least theoretical, perhaps?

Not sure if it's practical, but perhaps we could initialize the
threshold depending on the available memory.
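
Untested sketch (AFAIK memory-info returns nil where the information
isn't available, hence the fallback; the variable name and the
quarter-of-RAM choice are just for illustration):

(defvar net-data-warning-threshold
  (let ((info (memory-info)))       ; (TOTAL-RAM FREE-RAM ...) in KiB
    (if info
        (/ (* (car info) 1024) 4)   ; a quarter of total RAM, in bytes
      (* 256 1024 1024)))           ; fallback: 256 MiB
  "Byte threshold for warning about runaway network data.")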

> On the other hand, I could see that in some ways it would be easier to
> implement in wait_reading_process_output: We could just maintain a byte
> counter in the process objects (if we don't do that already) and have a
> callback we call if that counter grows larger than
> `large-file-warning-threshold'.
>
> That way Emacs wouldn't be open to flooding from, say, rogue SMTP
> servers, either.

If we can have a more general protection, that would be even better,
in my view.  Are there any drawbacks to such a solution?

Best regards,
Stefan Kangas






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-07  1:51     ` Lars Ingebrigtsen
  2019-10-07 12:50       ` Stefan Kangas
@ 2019-10-07 16:13       ` Eli Zaretskii
  2019-10-08 16:27         ` Lars Ingebrigtsen
  1 sibling, 1 reply; 14+ messages in thread
From: Eli Zaretskii @ 2019-10-07 16:13 UTC (permalink / raw)
  To: Lars Ingebrigtsen; +Cc: stefan, 19565

> From: Lars Ingebrigtsen <larsi@gnus.org>
> Cc: Stefan Kangas <stefan@marxist.se>,  19565@debbugs.gnu.org
> Date: Mon, 07 Oct 2019 03:51:35 +0200
> 
> I think it would perhaps make some sense to warn (or query) the user if
> you get more data than `large-file-warning-threshold'.  I think it would
> be pretty trivial to implement -- at least in the new with-fetched-url
> interface, which I think is where this pretty theoretical problem is
> least theoretical, perhaps?
> 
> On the other hand, I could see that in some ways it would be easier to
> implement in wait_reading_process_output: We could just maintain a byte
> counter in the process objects (if we don't do that already) and have a
> callback we call if that counter grows larger than
> `large-file-warning-threshold'.

I think this must be in terms of bytes/sec, not just bytes.  E.g., I
have a spell-checker active during my entire Emacs session (which
could go on for weeks and months on end), and I don't want to get a
prompt just because the number of bytes that went through that pipe
rises above the threshold.  We may also need to measure the growth of the
Emacs memory footprint during that time, because if Emacs reads bytes
and discards them, it isn't going to be a problem, right?
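
For instance (hypothetical names; this assumes the filter records a
start time and a running byte count on the process object):

(defun process-byte-rate (proc)
  "Average bytes/sec received by PROC since it started."
  (let ((bytes (process-get proc 'byte-count))
        (elapsed (float-time
                  (time-subtract (current-time)
                                 (process-get proc 'start-time)))))
    (if (> elapsed 0) (/ bytes elapsed) 0)))

An idle spell-checker pipe gets a tiny rate by this measure; a
flooding connection doesn't.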






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-07 16:13       ` Eli Zaretskii
@ 2019-10-08 16:27         ` Lars Ingebrigtsen
  2019-10-08 16:47           ` Eli Zaretskii
  2019-10-08 16:50           ` Stefan Kangas
  0 siblings, 2 replies; 14+ messages in thread
From: Lars Ingebrigtsen @ 2019-10-08 16:27 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: stefan, 19565

Eli Zaretskii <eliz@gnu.org> writes:

> I think this must be in terms of bytes/sec, not just bytes.  E.g., I
> have a spell-checker active during my entire Emacs session (which
> could go on for weeks and months on end), and I don't want to get a
> prompt just because the number of bytes that went through that pipe
> rises above the threshold.  We may also need to measure the growth of the
> Emacs memory footprint during that time, because if Emacs reads bytes
> and discards them, it isn't going to be a problem, right?

Yeah, that's true -- a counter wouldn't help at all here.

Would checking the size of the `process-buffer' of the process be more
helpful?  It might be a somewhat unnatural thing to do -- Emacs doesn't
give you a warning if you say

(dotimes (i 100000000) (insert (make-string 80 ?a)))  ; inserts ~8 GB of text

so perhaps that's not a good heuristic, either.

So bytes/sec, as you suggest, may be the best heuristic.  But it should
only kick in after having received a large number of bytes, probably.

-- 
(domestic pets only, the antidote for overdose, milk.)
   bloggy blog: http://lars.ingebrigtsen.no






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-08 16:27         ` Lars Ingebrigtsen
@ 2019-10-08 16:47           ` Eli Zaretskii
  2019-10-08 16:50           ` Stefan Kangas
  1 sibling, 0 replies; 14+ messages in thread
From: Eli Zaretskii @ 2019-10-08 16:47 UTC (permalink / raw)
  To: Lars Ingebrigtsen; +Cc: stefan, 19565

> From: Lars Ingebrigtsen <larsi@gnus.org>
> Cc: stefan@marxist.se,  19565@debbugs.gnu.org
> Date: Tue, 08 Oct 2019 18:27:15 +0200
> 
> So bytes/sec, as you suggest, may be the best heuristic.  But it should
> only kick in after having received a large number of bytes, probably.

Yes, I agree.  So maybe make it kick in once the process buffer is
large enough?  And even here we will need to consider, say, shell and
term.el buffers, which could grow quite large.







* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-08 16:27         ` Lars Ingebrigtsen
  2019-10-08 16:47           ` Eli Zaretskii
@ 2019-10-08 16:50           ` Stefan Kangas
  2019-10-08 17:22             ` Eli Zaretskii
  1 sibling, 1 reply; 14+ messages in thread
From: Stefan Kangas @ 2019-10-08 16:50 UTC (permalink / raw)
  To: Lars Ingebrigtsen; +Cc: 19565

Lars Ingebrigtsen <larsi@gnus.org> writes:

> So bytes/sec, as you suggest, may be the best heuristic.  But it should
> only kick in after having received a large number of bytes, probably.

Maybe this is a stupid question, but what if I'm on a slow connection?
Then I would never hit the max?  Emacs also has users in areas of the
world where connections are generally slow and where, AFAIK, they may
on top of that have to pay for data.  Also consider the use case of a
user from the developed world currently on data roaming, with a
maximum of 100 MiB of free data...

I'm not against the bytes/sec idea, and maybe I don't understand it
well enough, but I also think there is a case for being able to
specify a maximum number of bytes for a particular connection.  For
example, the "archive-contents" file is never that big unless
something is seriously wrong.  The MELPA "archive-contents" file is
probably one of the biggest examples in use today and currently weighs
in at 1,433,186 bytes.  This means that a maximum of, say, 128 MiB
should be extremely generous in this case, also allowing for it to
grow quite a lot in the next decade or so.
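
For scale, in Lisp:

(/ (* 128 1024 1024) 1433186)   ; => 93, i.e. ~93 times today's file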

Best regards,
Stefan Kangas






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-08 16:50           ` Stefan Kangas
@ 2019-10-08 17:22             ` Eli Zaretskii
  2019-10-08 17:38               ` Stefan Kangas
  0 siblings, 1 reply; 14+ messages in thread
From: Eli Zaretskii @ 2019-10-08 17:22 UTC (permalink / raw)
  To: Stefan Kangas; +Cc: larsi, 19565

> From: Stefan Kangas <stefan@marxist.se>
> Date: Tue, 8 Oct 2019 18:50:22 +0200
> Cc: Eli Zaretskii <eliz@gnu.org>, 19565@debbugs.gnu.org
> 
> Lars Ingebrigtsen <larsi@gnus.org> writes:
> 
> > So bytes/sec, as you suggest, may be the best heuristic.  But it should
> > only kick in after having received a large number of bytes, probably.
> 
> Maybe this is a stupid question, but what if I'm on a slow connection?

Please define "slow" in terms of bytes/sec.






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-08 17:22             ` Eli Zaretskii
@ 2019-10-08 17:38               ` Stefan Kangas
  2019-10-08 18:02                 ` Eli Zaretskii
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Kangas @ 2019-10-08 17:38 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Lars Ingebrigtsen, 19565

Eli Zaretskii <eliz@gnu.org> writes:

> > > So bytes/sec, as you suggest, may be the best heuristic.  But it should
> > > only kick in after having received a large number of bytes, probably.
> >
> > Maybe this is a stupid question, but what if I'm on a slow connection?
>
> Please define "slow" in terms of bytes/sec.

- 56 kbps dialup is 7000 bytes/sec.
- A 2G cellular network is 40 kbps or 384 kbps, i.e. 5000 bytes/sec
and 48000 bytes/sec respectively.

These are theoretical maximums.

Best regards,
Stefan Kangas






* bug#19565: Emacs vulnerable to endless-data attack (minor)
  2019-10-08 17:38               ` Stefan Kangas
@ 2019-10-08 18:02                 ` Eli Zaretskii
  0 siblings, 0 replies; 14+ messages in thread
From: Eli Zaretskii @ 2019-10-08 18:02 UTC (permalink / raw)
  To: Stefan Kangas; +Cc: larsi, 19565

> From: Stefan Kangas <stefan@marxist.se>
> Date: Tue, 8 Oct 2019 19:38:40 +0200
> Cc: Lars Ingebrigtsen <larsi@gnus.org>, 19565@debbugs.gnu.org
> 
> > > Maybe this is a stupid question, but what if I'm on a slow connection?
> >
> > Please define "slow" in terms of bytes/sec.
> 
> - 56 kbps dialup is 7000 bytes/sec.
> - A 2G cellular network is 40 kbps or 384 kbps, i.e. 5000 bytes/sec
> and 48000 bytes/sec respectively.

I see no problem with these numbers.  If a process buffer receives
more than some threshold at speeds like these or faster, we can prompt
the user.





