From: Philipp Stephani <p.stephani2@gmail.com>
To: Eli Zaretskii <eliz@gnu.org>
Cc: emacs-devel@gnu.org
Subject: Re: String encoding in json.c
Date: Tue, 26 Dec 2017 21:42:54 +0000
Message-ID: <CAArVCkRAPfumUxmKDCKp_jQPHfJE-C=Ps=1oo9km1TgJDjv1ag@mail.gmail.com>
In-Reply-To: <83incxjojg.fsf@gnu.org>


Eli Zaretskii <eliz@gnu.org> schrieb am Sa., 23. Dez. 2017 um 19:19 Uhr:

> > From: Philipp Stephani <p.stephani2@gmail.com>
> > Date: Sat, 23 Dec 2017 17:27:22 +0000
> > Cc: emacs-devel@gnu.org
> >
> > - We encode Lisp strings when passing them to Jansson. Jansson only
> accepts UTF-8 strings and fails (with
> > proper error reporting, not crashing) when encountering non-UTF-8
> strings. I think encoding can only make a
> > difference here for strings that contain sequences of bytes that are
> themselves valid UTF-8 code unit
> > sequences, such as "Ä\xC3\x84". This string is encoded as
> "\xC3\x84\xC3\x84" using utf-8-unix. (Note how
> > this is a case where encoding and decoding are not inverses of each
> other.) Without encoding, the string
> > contents will be \xC3\x84 plus two invalid 5-byte sequences. I think
> it's not obvious at all which interpretation is
> > correct; after all, "Ä\xC3\x84" is not equal to "ÄÄ", but the two
> strings now result in the same JSON
> > representation. This could be at least surprising, and I'd argue that
> the other behavior (raising an error) would
> > be more correct and more obvious.
>
> I think we need to take a step back and decide what would we want to
> do with strings which include raw bytes.  If we pass such strings to
> Jansson, it will just error out, right?


Yes.
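
For illustration: Jansson's documented contract is that string
construction simply fails on invalid UTF-8 instead of crashing.  If I
read the documentation correctly, a stand-alone check along these
lines should hold (untested sketch):

  #include <assert.h>
  #include <jansson.h>

  int
  main (void)
  {
    /* 0xC3 starts a two-byte sequence, but 0x28 is not a
       continuation byte, so these two bytes are not valid UTF-8.  */
    json_t *value = json_stringn ("\xC3\x28", 2);
    /* Per the Jansson documentation, construction fails cleanly and
       returns NULL; it neither crashes nor accepts the bytes.  */
    assert (value == NULL);
    return 0;
  }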


>   If so, then we could do one
> of the two:
>
>   . Check up front whether a Lisp string includes raw bytes, and if
>     it does, signal an error before even trying to encode it.  I think
>     find_charsets_in_text could be instrumental here; alternatively,
>     we could scan the string using BYTES_BY_CHAR_HEAD, looking for
>     either sequences longer than 4 bytes or 2-byte sequences whose
>     leading bytes are C0 or C1 (these are the raw bytes).
>
>   . Or we could encode the string, pass it to Jansson, and let it
>     error out; then we could produce our own diagnostics.
>

That's what we are currently doing.
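
(For concreteness, the up-front scan from your first bullet would
presumably look something like the untested sketch below, written for
json.c with lisp.h and character.h in scope; the error message is
only illustrative.  As argued further down, I don't think we need
it.)

  /* Hypothetical option 1: reject multibyte Lisp strings that
     contain raw bytes before handing them to Jansson.  */
  static void
  check_utf8_string (Lisp_Object string)
  {
    unsigned char *p = SDATA (string);
    ptrdiff_t nbytes = SBYTES (string);
    for (ptrdiff_t i = 0; i < nbytes; )
      {
        int len = BYTES_BY_CHAR_HEAD (p[i]);
        /* Raw bytes are stored as 2-byte sequences with lead byte C0
           or C1; sequences longer than 4 bytes are not UTF-8 either.  */
        if (len > 4 || p[i] == 0xC0 || p[i] == 0xC1)
          signal_error ("Not a valid UTF-8 string", string);
        i += len;
      }
  }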


>
> Which one of these do you prefer?


The third option: don't encode (pass SDATA directly) because we know that
valid Unicode sequences are represented as valid UTF-8 strings, and invalid
Unicode sequences as invalid UTF-8 strings, and that Jansson behaves
correctly in all cases.
Given otherwise equal behavior, I generally prefer the least complex
option, and "doing nothing" is simpler than "doing something".
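
Roughly, the serialization side would then be (untested sketch;
OBJECT is assumed to be a multibyte Lisp string, and the error
message is only illustrative):

  /* Hand the internal representation to Jansson and let it do the
     validation.  For well-formed text the internal representation is
     already valid UTF-8; raw bytes make it invalid UTF-8, which
     Jansson rejects by returning NULL.  */
  json_t *json = json_stringn (SSDATA (object), SBYTES (object));
  if (json == NULL)
    signal_error ("Not a valid UTF-8 string", object);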


> Currently, you opted for the 2nd
> one.  It is not clear to me that the option you've chosen is better,
> since (a) it relies on Jansson,


That's fine, because we only rely on documented and tested behavior. Doing
so is generally OK; if we couldn't rely on documented behavior, we couldn't
use external libraries (including glibc) at all.


> and (b) it encodes strings which don't
> need to be encoded.


True, that's why I argue we should remove the encoding step.


> OTOH, the check I propose in (a) means penalty
> for every caller.  But then such penalties never deterred you elsewhere
> in your code, so I wonder why this case is suddenly so different?
>

I generally prefer interface clarity and defensive programming, i.e. I
don't want to introduce undefined behavior on unexpected user input, and I
prefer signaling errors over silently doing something subtly wrong. But
here the Jansson library already performs all the checks we need, so we
don't need to add equivalent duplicate checks.


>
> It is true that if we believe Jansson's detection of invalid UTF-8,
> and we assume that raw bytes in their current representation will
> forever be the only extension of UTF-8 in Emacs, we could pass the
> internal representation to Jansson.  Personally, I'm not sure we
> should make such assumptions, but that's me.
>

I think it's fine to make such assumptions.
- Jansson documents how it handles invalid UTF-8.
- Jansson includes multiple test cases that check its behavior on
encountering invalid UTF-8.
- Emacs itself now also includes multiple test cases for such inputs.
- Jansson gets high scores in the nativejson-benchmark conformance tests
(the remaining failures are corner cases involving real numbers, which are
arguably not true errors and don't affect string handling).
- We don't need to assume that Emacs's internal encoding stays
UTF-8-compatible forever, but we can still rely on it. Given the importance
and widespread use of UTF-8, it's unlikely that our internal encoding will
have to change to something else within the next couple of years. Even if
the need to change the encoding should arise, the existing regression tests
should alert us immediately about what needs to change.
Emacs is a relatively monolithic codebase, where it's common for some
compilation units to rely on implementation details of other compilation
units. That's not super great, but also not a strong reason to artificially
restrict ourselves from using global knowledge about fundamental data types
such as strings. We expose SDATA and SBYTES in lisp.h, so why can't we say
what the bytes at SDATA actually contain?


>
> > - We decode UTF-8 strings after receiving them from Jansson. Jansson
> guarantees to only ever emit
> > well-formed UTF-8. Given that for well-formed UTF-8 strings, the UTF-8
> representation and the Emacs
> > representation are one and the same, we don't need decoding.
>
> Once again: do we really want to rely on external libraries to always
> DTRT and be bug-free?


Yes, we need to do that; otherwise we couldn't use external libraries at
all.
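
Concretely, skipping the decode step would mean building the Lisp
string straight from Jansson's buffer, along these lines (untested
sketch, inside the JSON-to-Lisp conversion function; JSON is a
Jansson string value):

  const char *data = json_string_value (json);
  ptrdiff_t nbytes = json_string_length (json);
  /* Jansson guarantees well-formed UTF-8, which is byte-for-byte
     identical to the internal representation of the same text, so no
     decoding is needed; we only count the characters in order to
     build the multibyte string.  */
  ptrdiff_t nchars
    = multibyte_chars_in_text ((const unsigned char *) data, nbytes);
  return make_multibyte_string (data, nchars, nbytes);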


>   We don't normally rely on external sources like
> that.


We do so all the time. For example, we rely on malloc(123) actually
returning either NULL or a memory block of at least 123 bytes.


> The cost of decoding is not too high;


It's not extremely high, but it is significant. Users of JSON serialization
such as Language Server Protocol or YCM clients regularly encode and decode
large JSON objects on every keystroke, so we need the JSON functions to be
fast. If we can speed them up by *removing* code (and thus complexity), then
we should do it.


> the price users will pay
> for Jansson's bugs will be much higher.
>

We shouldn't add workarounds for bugs just because they could potentially
happen in the future. True, bugs are possible in any library, but we might
as well hit a bug in malloc, which would be far more disastrous, and we
don't proactively attempt to work around theoretical malloc bugs. If and
when we encounter a serialization bug in Jansson that would produce invalid
UTF-8, I'm more than happy to add workarounds, but not for non-existing
bugs.


>
> >    And second, encoding keeps the
> >  encoding intact precisely because it is not a no-op: raw bytes are
> >  held in buffer and string text as special multibyte sequences, not as
> >  single bytes, so just copying them to output instead of encoding will
> >  produce non-UTF-8 multibyte sequences.
> >
> > That's the correct behavior, I think. JSON values must be valid Unicode
> strings, and raw bytes are not.
>
> Neither are the internal representations of raw bytes, so what's your
> point here?
>

The point is that encoding a multibyte string that contains a sequence of two
raw bytes can produce a valid UTF-8 string (as in the "Ä\xC3\x84" example
above), while passing the internal bytes through directly cannot.


>
> >  >   /* We need to send a valid UTF-8 string.  We could encode `object'
> >  >      but by not encoding it, we guarantee it's valid utf-8, even if
> >  >      it contains eight-bit-bytes.  Of course, you can still send
> >  >      manually-crafted junk by passing a unibyte string.  */
> >
> >  If gnutls.c and dbusbind.c don't encode and decode text that comes
> >  from and goes to outside, then they are buggy.
> >
> > Not necessarily. As mentioned, the internal encoding of multibyte
> strings is even mentioned in the Lisp
> > reference; and the above comment indicates that it's OK to use that
> information at least within the Emacs
> > codebase.
>
> I think that comment is based on a mistake, or maybe I don't really
> understand it.  Internal representation is not in general valid UTF-8,
> that's for sure.
>

Agreed, the comment should at least be reworded, e.g. "If OBJECT is a
well-formed Unicode scalar value sequence, the unencoded byte range is a
valid UTF-8 string, so we don't need to encode it. If OBJECT is not
well-formed or unibyte, the function will return EINVAL instead of
exhibiting undefined behavior."


>
> And the fact that the internal representation is documented doesn't
> mean we can draw the conclusions like that.


Why not? Surely we can make use of documented information.


> For starters, the
> documentation doesn't tell all the story: the 2-byte representation of
> raw bytes is not described there.
>

What's the 2-byte representation?


>
> > Some parts are definitely encoded, but for example, there is c_hostname
> in Fgnutls_boot, which doesn't
> > encode the user-supplied string.
>
> That's a bug.
>

Maybe, maybe not. gnutls_server_name_set explicitly documents that the
hostname is interpreted as UTF-8 (presumably even on Windows), so if we can
rely on the UTF-8-ness of strings, not encoding it is OK.


>
> >  Well, I disagree with that conclusion.  Just look at all the calls to
> >  decode_coding_*, encode_coding_*, DECODE_SYSTEM, ENCODE_SYSTEM, etc.,
> >  and you will see where we do that.
> >
> > We obviously do *some* encoding/decoding. But when interacting with
> third-party libraries, we seem to leave
> > it out pretty frequently, if those libraries use UTF-8 as well.
>
> Most if not all of those places are just bugs.  People who work mostly
> on GNU/Linux tend to forget that not everything is UTF-8.
>

Definitely true for files and processes, but if an API (such as GnuTLS or
Jansson) explicitly documents that it expects UTF-8, then we should be able
to rely on that.

