Earley parsing in cubic time
Wait, isn’t the point of Earley parsing that it’s O(n³)? Actually, no: for a grammar of size G, Earley parsing an input of length n takes O(Gn³). How could you possibly parse in time independent of the size of the grammar? Consider regular expressions: you can run them in O(Gn) with an NFA, but if you precompute a DFA you can run them in O(n), independent of the size of the original regular expression. Is such a thing possible for context-free grammars? It turns out that yes, it is.
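Here’s that idea in miniature, with a hand-built transition table standing in for the result of the subset construction (the regex and table are made up for illustration):

```python
# Hand-precomputed DFA for the regex (ab)*: a transition table plus an
# accepting set. The loop touches each input character exactly once, so
# matching is O(n) no matter how big the original regular expression was.
DFA = {(0, "a"): 1, (1, "b"): 0}
ACCEPTING = {0}

def matches(s):
    state = 0
    for c in s:
        state = DFA.get((state, c))
        if state is None:  # no transition: dead state, reject immediately
            return False
    return state in ACCEPTING

print(matches("abab"))  # True
print(matches("aba"))   # False
```

All the work that depends on the size of the expression happened up front, when the table was built; the matching loop only does dictionary lookups. The question is whether the same trick works for Earley.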
Usually Earley parsing is done one Earley item at a time, but the paper Practical Earley Parsing explains how to use LR(0) states as Earley items. This alone doesn’t make the runtime independent of the grammar, but it’s a step in the right direction. We need a way to merge LR(0) states with the same index (i,j), and a way to run the completer for many completed items at the same time. Then we need to carefully analyse how the worklist in Earley’s algorithm adds new Earley items, and perform those updates a set at a time, in such a way that we don’t need a worklist any more. Here’s what that looks like in Python:
(sorry, I had to codegolf that to make it fit in a slide – how those functions work isn’t so interesting anyway; it’s similar to an LR(0) automaton)
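In case the golfed version is hard to follow, here’s an ungolfed sketch of the same idea. Everything about it is illustrative rather than exact: the grammar encoding and helper names are invented for this sketch (the real completerC and friends slice the work differently), a fixpoint loop stands in for the careful worklist-free update, and it only recognizes, it doesn’t build parse trees.

```python
from functools import lru_cache

# Toy grammar: nonterminal -> list of right-hand sides (tuples of symbols).
# Anything that isn't a key of GRAMMAR is a terminal.
GRAMMAR = {
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "a"), ("a",)],
}

# An item is a dotted rule (lhs, rhs, dot). A chart entry is a frozenset
# of items, so it is hashable and every helper below can be memoized --
# that memoization is the "lazy parse table".

@lru_cache(maxsize=None)
def completed(state):
    """Nonterminals with a finished rule (dot at the end) in this state."""
    return frozenset(l for (l, r, d) in state if d == len(r))

@lru_cache(maxsize=None)
def advance(state, symbol):
    """LR(0)-style goto: move the dot over one symbol."""
    return frozenset((l, r, d + 1) for (l, r, d) in state
                     if d < len(r) and r[d] == symbol)

@lru_cache(maxsize=None)
def advance_set(state, symbols):
    """Completer, a set at a time: move the dot over any completed symbol."""
    return frozenset((l, r, d + 1) for (l, r, d) in state
                     if d < len(r) and r[d] in symbols)

@lru_cache(maxsize=None)
def predicted(state):
    """Nonterminals appearing directly after a dot."""
    return frozenset(r[d] for (l, r, d) in state
                     if d < len(r) and r[d] in GRAMMAR)

@lru_cache(maxsize=None)
def predict_state(nts):
    """Initial items for a set of nonterminals, closed under prediction
    through leading nonterminals."""
    seen, work, items = set(nts), list(nts), set()
    while work:
        b = work.pop()
        for rhs in GRAMMAR[b]:
            items.add((b, rhs, 0))
            if rhs and rhs[0] in GRAMMAR and rhs[0] not in seen:
                seen.add(rhs[0])
                work.append(rhs[0])
    return frozenset(items)

def recognize(start, xs):
    n = len(xs)
    # chart[j][i] = the single merged item set spanning (i, j):
    # all items currently at position j whose origin is i.
    chart = [{} for _ in range(n + 1)]
    chart[0][0] = frozenset({("$", (start,), 0)})  # "$" = synthetic start
    for j in range(n + 1):
        changed = True
        while changed:  # fixpoint over completer + predictor at position j
            changed = False
            for i in list(chart[j]):               # completer
                done = completed(chart[j][i])
                if done:
                    for k in list(chart[i]):
                        new = advance_set(chart[i][k], done)
                        old = chart[j].get(k, frozenset())
                        if new - old:
                            chart[j][k] = old | new
                            changed = True
            # predictor: everything predicted at j has origin j
            nts = frozenset().union(*map(predicted, chart[j].values()))
            new, old = predict_state(nts), chart[j].get(j, frozenset())
            if new - old:
                chart[j][j] = old | new
                changed = True
        if j < n:                                  # scanner
            for i, s in chart[j].items():
                new = advance(s, xs[j])
                if new:
                    chart[j + 1][i] = chart[j + 1].get(i, frozenset()) | new
    return ("$", (start,), 1) in chart[n].get(0, frozenset())
```

The key point is the indexing: there is at most one merged state per pair (i,j), and the completer advances a whole state over a whole set of completed nonterminals in one memoized call, instead of handling one Earley item at a time.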
The functions completerC, completerI, completerC_closure, and predictor work on LR(0) states I and sets of completed items C. You could hashcons and memoize them to build the parse tables lazily, or you could precompute those parse tables up front; either way, each call eventually runs in O(1). The main loop of the algorithm is just a bunch of nested loops, at most three deep over input positions, so it runs in O(n³) once those functions are memoized to run in O(1).
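As a quick sanity check of the sketch above, the memo caches are exactly those lazily built parse tables filling in:

```python
print(recognize("S", "a+a*a"))   # True: valid input for the toy grammar
print(recognize("S", "a+*a"))    # False
print(advance_set.cache_info())  # cache hits = reused parse-table entries
```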
You can see the algorithm in action on four examples here, here, here, and here. Use the left and right arrow keys on your keyboard to step forward and backward. Enjoy!