Kenichi Handa writes:

> detect-coding-string doesn't return all possible coding systems, but
> returns a possible coding systems Emacs may automatically detect in
> the current language environment.

Ah, I see.  Do you know of any other way to decide whether using a
given coding system to decode a string would give a valid result?  A
function similar to this would be really useful (a rough sketch of one
possible approach follows at the end of this message):

(defun possible-coding-system-for-string-p (str coding-system)
  "Return t if CODING-SYSTEM is a possible coding system for decoding STR."
  ...)

The issue comes from a discussion on the Gnus development list (I've
included one of the messages from that thread below).  Gnus does not
work very well when using CVS Emacs in a UTF-8 locale, because a lot
of non-MIME-capable clients don't include proper charset information.
This causes Gnus to decode many Latin-1 strings as UTF-8.  It would
help a lot if we could detect that a string cannot possibly be encoded
in UTF-8.  I know that it's not always possible to distinguish, but
just detecting strings that are invalid as UTF-8 would be very
helpful.  This doesn't just apply to UTF-8 but to any coding system,
of course.

> But the docstring of detect-coding-system is surely not good.  I've
> just changed the first paragraph as this.  How is it?

It's good.
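
For what it's worth, here is a rough sketch of the kind of check I
have in mind.  It is only a heuristic, not an existing API, and it
assumes a recent Emacs where bytes that a coding system cannot decode
come back as characters of the `eight-bit' charset (older Emacsen
used `eight-bit-control' and `eight-bit-graphic' instead):

(require 'cl-lib)

(defun possible-coding-system-for-string-p (str coding-system)
  "Return t if CODING-SYSTEM is a possible coding system for decoding STR.
STR should be a unibyte string of raw bytes.  This is a heuristic:
decode STR and report nil if the result still contains raw bytes,
i.e. byte sequences that CODING-SYSTEM could not interpret."
  (let ((decoded (decode-coding-string str coding-system)))
    ;; Any `eight-bit' character in the decoded result means some
    ;; byte sequence was invalid for CODING-SYSTEM.
    (not (cl-find-if (lambda (ch)
                       (eq (char-charset ch) 'eight-bit))
                     decoded))))

For example, the single Latin-1 byte for "é" is not valid UTF-8 but
decodes fine as Latin-1:

  (possible-coding-system-for-string-p "\xe9" 'utf-8)   => nil
  (possible-coding-system-for-string-p "\xe9" 'latin-1) => t

That would be enough for the Gnus case above: fall back from UTF-8 to
Latin-1 whenever the check returns nil.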