On 03/28/2016 09:32 PM, Dmitry Gutov wrote:
> On 03/28/2016 10:27 PM, Lars Magne Ingebrigtsen wrote:
>
>>> Still seems problematic if your 5 year old takes 2.7s to compute it on
>>> a 1GB file. You don't want to freeze for 2s in the normal course of
>>> editing just because you happen to cross the "original size" threshold.
>>
>> Yeah, I don't see any way around that.
>
> Don't use hashing.

Use e.g. buffer-undo-list. We save enough data to return the buffer
contents to the previous state, right? It should be possible to detect
whether a given sequence of undos is a no-op.

Alternatively, introduce a threshold above which the hash-based check
does not happen, and instead fall back to the old, less complex
behaviour in that case.
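
For illustration, here is a rough sketch of the undo-list idea. It relies on the documented fact that buffer-undo-list records a (t . TIMESTAMP) entry when a previously unmodified buffer is first changed; if that entry is still reachable, undoing back to it would restore the saved state without hashing the buffer. The function name is hypothetical, and this glosses over pending-undo bookkeeping:

```elisp
(require 'cl-lib)

(defun my-undo-can-reach-saved-state-p ()
  "Return non-nil if `buffer-undo-list' still holds a (t . TIMESTAMP) entry.
Such an entry marks the point at which the buffer was last unmodified,
so undoing back to it is a no-op relative to the saved file.
Hypothetical helper, not part of Emacs."
  (and (listp buffer-undo-list)
       (cl-some (lambda (entry)
                  ;; (t . TIMESTAMP) entries mark the last-unmodified state.
                  (and (consp entry) (eq (car entry) t)))
                buffer-undo-list)))
```

A real implementation would also have to confirm that no entries were truncated by undo-limit, since a pruned list can no longer reach the saved state.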