While preparing and installing the attached patch to Emacs master, I noticed that the byte optimizer assumes that floating-point arithmetic behaves the same at compile time as it does at run time. For example, on my x86-64 platform the byte compiler optimizes (/ 0 0.0) to -NaN, (- 1 1.0000000000000001) to 0.0, and (< 1 1.0000000000000001) to nil, even though these expressions would evaluate to different values on (say) an IBM mainframe that uses IBM floating point, and the first expression yields +NaN on some IEEE platforms (e.g., ARM).

These discrepancies mean that .elc files containing floating-point constants might not be platform-independent: byte-compiling a file on one machine X and running it on another machine Y can yield different results from byte-compiling and running the same file on Y.

Is this sort of discrepancy intended? If so, should it be documented in the Emacs Lisp manual? On the one hand, I doubt that this sort of optimization buys us much performance; on the other, I doubt that many users care about the discrepancies.
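
For concreteness, here is one way I can observe the folding from a running Emacs (the results shown are from my x86-64 build; byte-optimize-form is the optimizer's form-level entry point in byte-opt.el):

    (require 'byte-opt)

    ;; Each form below is constant-folded at compile time, so the
    ;; result computed on the compiling machine is what gets baked
    ;; into the .elc file.
    (byte-optimize-form '(/ 0 0.0))
    ;; => -0.0e+NaN   (negative NaN here; +NaN on, e.g., ARM)

    (byte-optimize-form '(- 1 1.0000000000000001))
    ;; => 0.0         (the literal already reads as 1.0 in IEEE double)

    (byte-optimize-form '(< 1 1.0000000000000001))
    ;; => nil

Running (disassemble (byte-compile (lambda () (/ 0 0.0)))) likewise shows the NaN stored as a constant in the compiled code rather than computed at run time.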