On Sun, May 09, 2021 at 11:13:36PM +0200, R. Diez wrote:
>
> EZ> That's not the same. The warning you saw is triggered by a failure to
> EZ> convert to the external encoding, so it consumes no extra CPU cycles.
>
> But it could be, from my (admittedly naive) point of view:
>
> (convert-to-external-encoding but-with-some-extra-flag-to-warn-about-NUL-chars)
>
> EZ> Null bytes will not fail anything, so you should test for them
> EZ> explicitly (and in some encodings, like UTF-16, they are necessary and
> EZ> cannot be avoided).
>
> I didn't know that about UTF-16, but I could not find any information
> about it either. Why is a NUL char necessary in UTF-16 and not UTF-8?

UTF-16 [1] encodes characters in 16-bit "packets" called "code units".
Like UTF-8, whenever one unit isn't sufficient, you use more, and the
bit pattern tells you whether more are coming. In the case of UTF-16,
"more" means at most two units in total (a surrogate pair).

For the "small" code points, 8 of those 16 bits are zero. Which byte
that is depends on endianness, but either way you end up with a lot of
zero bytes in your text. This is what UTF-16BE (big endian) looks like:

tomas@trotzki:~$ echo "hello, world" | iconv -f utf-8 -t UTF-16BE | hexdump -C
00000000  00 68 00 65 00 6c 00 6c  00 6f 00 2c 00 20 00 77  |.h.e.l.l.o.,. .w|
00000010  00 6f 00 72 00 6c 00 64  00 0a                    |.o.r.l.d..|
0000001a

... so a bit like Swiss cheese.

UTF-16 also needs a BOM (byte order mark) to disambiguate the
endianness. UTF-8 doesn't (it is a byte stream), although Microsofty
applications tend to sneak one in, just to annoy the rest of us. Or
something.

Cheers
 - t
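
PS: a rough Emacs Lisp sketch of the "test for them explicitly" idea from
Eli's quote above, plus the same UTF-16BE experiment done from inside Emacs.
One possible way only, not necessarily the best; the hook function name
my-warn-about-nul-chars is made up, the rest (encode-coding-string, cl-count,
search-forward, before-save-hook) is standard Emacs Lisp:

  (require 'cl-lib)

  ;; Same picture as the hexdump: encode a string as UTF-16BE and count
  ;; the zero bytes -- one per character for this all-ASCII string.
  (cl-count ?\0 (encode-coding-string "hello, world\n" 'utf-16be))  ; => 13

  ;; Explicitly test the current buffer for literal NUL characters,
  ;; e.g. from before-save-hook, and just warn about them.
  (defun my-warn-about-nul-chars ()
    (save-excursion
      (goto-char (point-min))
      (when (search-forward "\0" nil t)
        (message "Warning: buffer contains NUL characters"))))

  (add-hook 'before-save-hook #'my-warn-about-nul-chars)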