On Wed, Aug 16, 2023 at 12:33:33AM +0200, Emanuel Berg wrote:
> Ihor Radchenko wrote:
>
> > Yes, but when CBCL is orders of magnitude faster, it
> > indicates something conceptually wrong in the algo.
>
> Indeed, I'll remove it, thanks.
>
> But my CL skills aren't at that level so someone else added
> it. A strange optimization indeed, that breaks the code.

It only breaks the code if you "don't know what you are doing".

See, without the optimization the code will have, at each and every
arithmetic operation, to check: "Hmm... Is this thing going to
overflow? Hm. It might, so better use bignums. Phew, it didn't, so
back to fixnums."

Now we know that modern CPU architectures have a hard time with
conditional branches (pipeline stalls, branch mispredictions, all
that nasty stuff). So this "Hmm..." above is costing real money, even
in cases where you won't need it, because things ain't gonna
overflow.

The compiler tries to do a good job of looking into calculations and
deciding "this incf down there won't ever push us over the fixnum
limit, because we know we are starting with a number below 10". But
the programmer sometimes has more knowledge and can prove that things
won't overflow, ever. Or that, should things overflow, it won't
matter anyway. It's for those cases that this kind of optimization
exists.

C, by the way, always runs in this mode. Unsigned integers will
silently wrap around; that's documented behaviour. Signed integers
will do whatever their thing is (technically, this is called
"undefined behaviour"). Perhaps you wanted just to compute fib modulo
some big power of two? Then your program was correct, after all...

Cheers
-- t
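
P.S. For concreteness, a minimal sketch of the kind of declaration
meant above, assuming SBCL-style Common Lisp; the function name and
the promise that the result stays within fixnum range are
illustrative assumptions, not code from the thread:

    (defun fib-fix (n)
      ;; Illustrative assumption: the caller guarantees that n is
      ;; small enough for the result to fit in a fixnum.  With
      ;; safety 0 and the THE declarations, the compiler may use raw
      ;; machine arithmetic and skip the overflow/bignum checks.
      (declare (optimize (speed 3) (safety 0))
               (type fixnum n))
      (if (< n 2)
          n
          (the fixnum (+ (the fixnum (fib-fix (- n 1)))
                         (the fixnum (fib-fix (- n 2)))))))

With safety 0 those declarations are a promise, not a check: call it
with an n whose Fibonacci number exceeds most-positive-fixnum and the
result is silently wrong, which is exactly the "breaks the code if
you don't know what you are doing" part.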