I’ve recently been trying to teach myself how parsers (for languages/context-free grammars) work, and most of it seems to be making sense, except for one thing. I’m focusing my attention in particular on LL(k) grammars, for which the two main algorithms seem to be the LL parser (using a stack/parse table) and the recursive descent parser (simply using recursion). As far as I can see, the recursive descent algorithm works on all LL(k) grammars and possibly more, whereas an LL parser works on all LL(k) grammars. A recursive descent parser is clearly much simpler than an LL parser to implement, however (just as an LL one is simpler than an LR one).

So my question is: what advantages/problems might one encounter when using either of the algorithms? Why might one ever pick LL over recursive descent, given that LL works on the same set of grammars and is trickier to implement?

**Answer**

LL is usually a more efficient parsing technique than recursive-descent. In fact, a naive recursive-descent parser will actually be *O(k^n)* (where *n* is the input size) in the worst case. Some techniques such as memoization (which yields a Packrat parser) can improve this as well as extend the class of grammars accepted by the parser, but there is always a space tradeoff. LL parsers are (to my knowledge) always linear time.
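As a rough illustration of the memoization idea, here is a hypothetical Python sketch of packrat-style caching in a backtracking recursive-descent parser: each parse routine’s result is cached by input position, trading memory for linear-time behaviour. The grammar (`S -> "a" S "a" | "aa"`) is made up for the example.

```python
# Packrat-style memoization sketch for a backtracking recursive-descent
# parser recognizing the (hypothetical) grammar: S -> "a" S "a" | "aa".
from functools import lru_cache

def parse(text):
    @lru_cache(maxsize=None)        # memo table: position -> parse result
    def S(pos):
        # Alternative 1: "a" S "a"
        if text.startswith("a", pos):
            mid = S(pos + 1)
            if mid is not None and text.startswith("a", mid):
                return mid + 1
        # Alternative 2: "aa" (tried only if alternative 1 fails)
        if text.startswith("aa", pos):
            return pos + 2
        return None                 # both alternatives failed

    end = S(0)
    return end == len(text)         # success iff all input was consumed
```

Without the `lru_cache` line the same code backtracks and may re-parse the same positions repeatedly; with it, each position is parsed at most once.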

On the flip side, you are correct in your intuition that recursive-descent parsers can handle a greater class of grammars than LL. Recursive-descent can handle any grammar which is LL(*) (that is, *unlimited* lookahead) as well as a small set of ambiguous grammars. This is because recursive-descent is actually a directly-encoded implementation of PEGs, or Parsing Expression Grammars. Specifically, the disjunctive operator (`a | b`) is not commutative, meaning that `a | b` does not equal `b | a`. A recursive-descent parser will try each alternative in order. So if `a` matches the input, it will succeed even if `b` *would have* matched the input. This allows classic “longest match” ambiguities like the dangling `else` problem to be handled simply by ordering disjunctions correctly.
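To make the ordered-choice behaviour concrete, here is a hypothetical Python sketch: swapping the order of the alternatives changes which one a PEG-style parser commits to, even on identical input.

```python
# Sketch of PEG-style ordered choice. Parsers take (text, pos) and return
# the new position on success, or None on failure. Names are illustrative.
def literal(s):
    def p(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return p

def choice(*alts):
    def p(text, pos):
        for alt in alts:            # try alternatives strictly in order
            end = alt(text, pos)
            if end is not None:
                return end          # commit to the first success
        return None
    return p

short_first = choice(literal("in"), literal("int"))
long_first  = choice(literal("int"), literal("in"))

print(short_first("int", 0))  # 2 -- "in" wins; "int" is never tried
print(long_first("int", 0))   # 3 -- listing the longer alternative first
```

Ordering the longer (or more specific) alternative first is exactly the trick that resolves dangling-`else`-style ambiguities in a PEG.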

With all of that said, it is *possible* to implement an LL(k) parser using recursive-descent so that it runs in linear time. This is done by essentially inlining the predict sets so that each parse routine determines the appropriate production for a given input in constant time. Unfortunately, such a technique eliminates an entire class of grammars from being handled. Once we get into predictive parsing, problems like dangling `else` are no longer solvable with such ease.
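A hypothetical sketch of such a predictive recursive-descent parser in Python, with the predict sets inlined as lookahead checks (the expression grammar here is illustrative, not from the original answer):

```python
# Predictive recursive descent: each routine inspects one lookahead token
# and commits to a production immediately, so there is no backtracking.
# Grammar: E -> T E', E' -> "+" T E' | epsilon, T -> NUM | "(" E ")"
def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected):
        nonlocal pos
        if peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {peek()!r}")
        pos += 1

    def E():
        T()
        E_prime()

    def E_prime():
        if peek() == "+":           # PREDICT(E' -> + T E') = {+}
            eat("+")
            T()
            E_prime()
        # otherwise take the epsilon production

    def T():
        tok = peek()
        if tok == "(":              # PREDICT(T -> ( E )) = {(}
            eat("(")
            E()
            eat(")")
        elif tok is not None and tok.isdigit():
            eat(tok)                # PREDICT(T -> NUM) = {NUM}
        else:
            raise SyntaxError(f"unexpected token {tok!r}")

    E()
    return pos == len(tokens)       # success iff all tokens were consumed
```

Each routine does a constant amount of work per token consumed, which is what makes the whole parse linear.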

As for why LL would be chosen over recursive-descent, it’s mainly a question of efficiency and maintainability. Recursive-descent parsers are markedly easier to implement, but they’re usually harder to maintain since the grammar they represent does not exist in any declarative form. Most non-trivial parser use-cases employ a parser generator such as ANTLR or Bison. With such tools, it really doesn’t matter if the algorithm is directly-encoded recursive-descent or table-driven LL(k).

As a matter of interest, it is also worth looking into recursive-ascent, which is a parsing algorithm directly encoded after the fashion of recursive-descent, but capable of handling any LALR grammar. I would also dig into parser combinators, which are a functional way of composing recursive-descent parsers together.
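For a taste of the combinator style, here is a hypothetical Python sketch in which tiny parsing functions are composed into larger ones (names and conventions are made up for the example):

```python
# Minimal parser combinators: each parser takes (text, pos) and returns
# the new position on success or None on failure.
def char(c):
    def p(text, pos):
        return pos + 1 if pos < len(text) and text[pos] == c else None
    return p

def seq(*parsers):
    def p(text, pos):
        for parser in parsers:      # run each parser on the remainder
            pos = parser(text, pos)
            if pos is None:
                return None
        return pos
    return p

def many(parser):
    def p(text, pos):
        while True:                 # zero or more repetitions
            nxt = parser(text, pos)
            if nxt is None:
                return pos
            pos = nxt
    return p

# "a" followed by one or more "b": equivalent to the regex a b+
ab_plus = seq(char("a"), char("b"), many(char("b")))
print(ab_plus("abbb", 0))  # 4
print(ab_plus("a", 0))     # None
```

The resulting parser is itself a recursive-descent parser; the combinators just let you build it compositionally rather than writing each routine by hand.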

**Attribution**
*Source: Link, Question Author: Noldorin, Answer Author: Daniel Spiewak*