Eli Zaretskii <eliz@gnu.org> wrote on Wed, Dec 28, 2016 at 19:35:
> From: Philipp Stephani <p.stephani2@gmail.com>
> Date: Wed, 28 Dec 2016 18:18:25 +0000
> Cc: larsi@gnus.org, emacs-devel@gnu.org, kentaro.nakazawa@nifty.com,
> dgutov@yandex.ru
>
> > > That's right -- why should any code care? Yet url.el does.
> >
> > No, it doesn't, not if the string is plain ASCII.
> >
> > But in that case it isn't, it's morally a byte array.
>
> Yes, because the internal representation of characters in Emacs is a
> superset of UTF-8.
>
> That has nothing to do with characters. A byte array is conceptually different from a character string.
> In Emacs, they are both implemented using very similar objects.
Yes, that's why I said "conceptually different". The concepts may be different, but the implementation might still be the same.
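
To illustrate (a scratch-buffer sketch; "Hellö" is just an arbitrary
non-ASCII example):

    ;; Both are Lisp strings, but one holds characters, the other bytes:
    (multibyte-string-p "Hellö")                     ; => t, characters
    (multibyte-string-p (unibyte-string #x48 #x69))  ; => nil, raw bytes
    ;; Encoding converts characters into bytes; the result is unibyte:
    (multibyte-string-p (encode-coding-string "Hellö" 'utf-8))  ; => nil
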
> > What Emacs lacks is good support for byte arrays.
>
> Unibyte strings are byte arrays. What do you think we lack in that regard?
>
> If unibyte strings should be used for byte arrays, then the URL functions should indeed signal an error
> whenever url-request-data is a multibyte string, as HTTP requests are conceptually byte arrays, not character
> strings.
> Which is what we do now.
There is no such check for url-request-data. There is a check on the complete request, but it doesn't verify unibyteness either.
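
If unibyte strings were the intended contract here, I'd expect a guard
along these lines (purely hypothetical, nothing like it exists in
url-http.el today):

    ;; Hypothetical guard enforcing the unibyte contract:
    (when (and url-request-data (multibyte-string-p url-request-data))
      (error "`url-request-data' must be a unibyte string"))
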
> > For HTTP, process-send-string shouldn't need to deal
> > with encoding or EOL conversion, it should just accept a byte array and send that, unmodified.
>
> I disagree. Handling unibyte strings is a nuisance, so Emacs allows
> most applications to be oblivious of them, and just handle
> human-readable text.
>
> That is the wrong approach (byte arrays and character strings are fundamentally different types, and mixing
> them together only causes pain), and it cannot work when implementing network protocols. HTTP requests
> are *not* human-readable text, they are byte arrays. Attempting to handle Unicode strings can't work because
> we wouldn't know the number of encoded bytes.
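
To make the Content-Length problem concrete (a scratch-buffer sketch;
"ü" is an arbitrary non-ASCII character):

    (length "ü")                                ; => 1 (characters)
    (length (encode-coding-string "ü" 'utf-8))  ; => 2 (bytes)
    ;; Content-Length must say 2, but the multibyte string alone can't
    ;; tell us that until we commit to a specific encoding.
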
> You are arguing against a long and quite painful history of non-ASCII
> strings in Emacs. What we have now is based on a lot of experience
> and at least two very large refactoring jobs. Going back would be a
> very bad idea indeed, as we've been there already, and users didn't
> like that. Some of us are old enough to remember the notorious \201
> bytes creeping into text files and mail messages, due to that. Never
> again.
I'm not suggesting going back; too much would break.
> Our experience is that we should keep use of unibyte strings in Lisp
> application code to the absolute minimum, ideally zero. Once we
> arrived at that conclusion, we've been living happily ever after.
> This minor issue we are discussing here is certainly not worth
> repeating past mistakes for which we paid plenty in sweat and blood.
If unibyte strings are how Emacs represents octet streams, then unibyte strings must be usable in application code: octet streams exist in the real world, and applications have to be able to handle them somehow. If you don't want unibyte strings in application code, then you need to provide some other way to represent octet streams.
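
Here's roughly what I have in mind, as a sketch (host, path, and
payload are placeholders): encode the body up front so that `length'
counts bytes, and open the connection with the `binary' coding system
so that process-send-string passes the octets through untouched:

    (let* ((proc (make-network-process :name "http-demo"
                                       :host "example.com" :service 80
                                       :coding 'binary))
           ;; Encode once, explicitly; `body' is now a unibyte string.
           (body (encode-coding-string "key=välue" 'utf-8))
           (head (format (concat "POST /demo HTTP/1.1\r\n"
                                 "Host: example.com\r\n"
                                 "Content-Length: %d\r\n\r\n")
                         (length body))))
      ;; No EOL or charset conversion happens here; the bytes go out as-is.
      (process-send-string proc (concat head body)))
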