Please have a look at the second patch, which applies on top of the first one. This time I added after-change hooks, so if you create a parser for a buffer and then edit that buffer, the parser is kept up to date lazily. In summary: the parser parses the whole buffer the first time the user asks for the parse tree. In after-change-functions, no parsing is done; we only update the tree with the position changes. The next time the user asks for the parse tree, the whole buffer is re-parsed incrementally. (I didn't read the paper, but I assume the parser knows which bits to re-parse because we updated the tree with the position changes.)

Maybe this is not lazy enough, and I should do a benchmark. Here is a simple benchmark that I did:

Benchmark 1: a 22M json file, opened in fundamental mode. Parsing the whole buffer took 17s and used 3G of memory.

Benchmark 2: a 1.6M json file, opened in fundamental mode. First I parsed the whole buffer, which took 1.039s with no GC. Then I ran this:

    (benchmark-run 1000
      (dotimes (_ 1000)
        (insert "1,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,\n"))
      (dotimes (_ 1000)
        (backward-delete-char
         (length "1,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,\n"))))

Result: (39.302071 8 4.3011029999999995), with a fair amount of time spent in GC. Then I removed the parser and ran it again. Result: (33.589416 8 4.405495999999999). (benchmark-run returns a list of the total elapsed time in seconds, the number of GC runs, and the time spent in GC.)

No parsing is done in either run (because parsing is lazy, and I didn't ask for the parse tree). The only difference is that, in the first run, after-change-functions update the tree with the position changes. My conclusion is that the after-change overhead is pretty insignificant, and the initial parse is a bit slow (on large files). I'm running this on a 1.4 GHz quad-core Intel Core i5 with 16G of memory. Of course, I'm open to suggestions for a better benchmark.

The amateur log of the benchmark is in benchmark.el. The json file I used in the second benchmark is benchmark.2.json. The patch is ts.2.patch.
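
For reference, the lazy scheme described above can be sketched in outline like this. All the names here (my-ts--parser, my-ts--edit-tree, my-ts--parse, my-ts--tree) are hypothetical stand-ins for the patch's internals, not its actual API:

    ;; Hypothetical sketch of the lazy-update scheme.  `my-ts--edit-tree',
    ;; `my-ts--parse', and `my-ts--tree' stand in for internal primitives.
    (defvar-local my-ts--parser nil
      "The tree-sitter parser attached to this buffer, if any.")

    (defvar-local my-ts--need-reparse nil
      "Non-nil when edits have been recorded but not yet re-parsed.")

    (defun my-ts--after-change (beg end old-len)
      "Record the edit BEG..END (replacing OLD-LEN chars) in the tree.
    This only shifts node positions; no parsing happens here."
      (when my-ts--parser
        (my-ts--edit-tree my-ts--parser beg end old-len)
        (setq my-ts--need-reparse t)))

    (defun my-ts-tree ()
      "Return the parse tree, incrementally re-parsing first if needed."
      (when my-ts--need-reparse
        (my-ts--parse my-ts--parser)   ; reuses the edited old tree
        (setq my-ts--need-reparse nil))
      (my-ts--tree my-ts--parser))

The hook would be installed buffer-locally with (add-hook 'after-change-functions #'my-ts--after-change nil t), so only buffers with a parser pay the (small) per-edit cost measured above.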