`ñ'.  (For the non-Mule-implementers: this hack works without Mule but
won't work in Mule, because Mule matches those two trailing fields to
the character's charset, and 241 corresponds to a Latin-1 character, so
a "-*-*-*-*-*-*-*-*-*-*-*-*-iso8859-1" font from the set associated
with the default face will be used.)  For this reason, using char-int
and int-char in XEmacs is generally a bug unless you want to examine
the internal coding system; you almost always want to use make-char.
(Of course for ASCII values it's an accepted idiom, but still a bad
habit.)

> AFAIK most of the programming errors we've had to deal with over the
> years (i.e. in Emacs-20, 21, 22) had to do with incorrect (or missing)
> encoding/decoding and most of those errors existed just as much on
> XEmacs

I don't think that's true; AFAIK we have *no* recorded instances of the
\201 bug, while that regression persisted in GNU Emacs (albeit a
patched version, at first) from 1992 at the latest until just a few
years ago.  I think it got fixed in Mule (i.e., all paths into or out
of a text object got a coding stage) before Mule was integrated into
XEmacs or Emacs, and the regression when Mule was integrated into
Emacs was caused by the performance hack of treating a text object as
unibyte.

> because there's no way to fix them right in the infrastructure code
> (tho XEmacs may have managed to hide them better by detecting the
> lack of encoding/decoding and guessing an appropriate coding-system
> instead).

I don't know of any such guessing.  When the user asks us to, we guess
on input, just as you do, but once we've got text in internal format,
there is no more guessing to be done.  Emacs will encounter the need
to guess because you support "text object as unibyte".

Vive la différence technique!  ;-)
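
P.S.  To make the make-char point concrete, a minimal Emacs Lisp
sketch.  The charset name `latin-iso8859-1' and the code 241 for ñ are
taken from the discussion above; the exact argument conventions (full
octet vs. 7-bit position) can differ between versions, so treat this
as illustrative rather than definitive:

    ;; Fragile: round-tripping a character through a raw integer
    ;; exposes the internal representation, which is exactly what
    ;; broke the `ñ' hack under Mule.
    ;;   (int-char (char-int some-char))
    ;;
    ;; Robust: name the charset and the code point explicitly, so the
    ;; result is the Latin-1 n-with-tilde regardless of internals.
    (make-char 'latin-iso8859-1 241)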