What is the difference between LR, SLR, and LALR parsers?




What is the actual difference between LR, SLR, and LALR parsers? I know that SLR and LALR are types of LR parsers, but what is the actual difference as far as their parsing tables are concerned?



And how to show whether a grammar is LR, SLR, or LALR? For an LL grammar we just have to show that any cell of the parsing table should not contain multiple production rules. Any similar rules for LALR, SLR, and LR?



For example, how can we show that the grammar


S --> Aa | bAc | dc | bda
A --> d



is LALR(1) but not SLR(1)?



EDIT (ybungalobill): I didn't get a satisfactory answer on what the difference between LALR and LR is. So LALR's tables are smaller in size, but it can recognize only a subset of LR grammars. Can someone elaborate more on the difference between LALR and LR, please? LALR(1) and LR(1) will be sufficient for an answer. Both of them use 1 token of look-ahead and both are table driven! How are they different?





Well, even I'm looking for a proper answer to this. LALR(1) is just a slight modification of LR(1), where the table size is reduced so that we can minimize memory usage ...
– vikkyhacks
Sep 9 '14 at 18:01




7 Answers



SLR, LALR and LR parsers can all be implemented using exactly the same table-driven machinery.



Fundamentally, the parsing algorithm collects the next input token T and consults the current state S (and the associated lookahead, GOTO, and reduction tables) to decide what to do: shift T and move to a successor state, reduce by some grammar rule, accept the input, or signal a syntax error.
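To make that concrete, here is a minimal sketch (in Python) of the shared table-driven loop. The table encoding is made up for illustration: ACTION maps (state, token) to ("shift", next_state), ("reduce", lhs, rhs_length) or ("accept",), and GOTO maps (state, nonterminal) to a state. A real generator emits the same kind of tables, just packed more cleverly.

def parse(tokens, ACTION, GOTO):
    stack = [0]                        # state stack; 0 is the start state
    tokens = list(tokens) + ["$"]      # "$" marks end of input
    i = 0
    while True:
        state, tok = stack[-1], tokens[i]
        act = ACTION.get((state, tok))
        if act is None:
            raise SyntaxError("unexpected %r in state %d" % (tok, state))
        if act[0] == "shift":
            stack.append(act[1])       # push the successor state, consume token
            i += 1
        elif act[0] == "reduce":
            _, lhs, rhs_len = act
            if rhs_len:                # pop one state per right-hand-side symbol
                del stack[-rhs_len:]
            stack.append(GOTO[(stack[-1], lhs)])
        else:                          # ("accept",)
            return True

Nothing in this loop knows whether the tables came from an SLR, LALR, or canonical LR generator; only the table contents differ.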



So, if they all use the same machinery, what's the point?



The purported value of SLR is its simplicity of implementation; you don't have to scan through the possible reductions checking lookahead sets, because there is at most one, and this is the only viable action if there are no SHIFT exits from the state. Which reduction applies can be attached specifically to the state, so the SLR parsing machinery doesn't have to hunt for it. In practice, L(AL)R parsers handle a usefully larger set of languages, and are so little extra work to implement that nobody implements SLR except as an academic exercise.



The difference between LALR and LR has to do with the table generator. LR parser generators keep track of all possible reductions from specific states and their precise lookahead sets; you end up with states in which every reduction is associated with its exact lookahead set from its left context. This tends to build rather large sets of states. LALR parser generators are willing to combine states if the GOTO tables and lookahead sets for reductions are compatible and don't conflict; this produces considerably smaller numbers of states, at the price of not being able to distinguish certain symbol sequences that LR can distinguish. So, LR parsers can parse a larger set of languages than LALR parsers, but have very much bigger parser tables. In practice, one can find LALR grammars which are close enough to the target languages that the size of the state machine is worth optimizing; the places where the LR parser would be better are handled by ad hoc checking outside the parser.
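To illustrate the merging, here is a hedged sketch of the "build canonical LR(1), then merge states with equal LR(0) cores" view; the item encoding is made up for illustration, not any particular generator's code. An LR(1) item is ((lhs, rhs, dot), lookahead) and a state is a set of items.

from collections import defaultdict

def lalr_merge(lr1_states):
    merged = defaultdict(dict)         # LR(0) core -> {item: set of lookaheads}
    for state in lr1_states:
        core = frozenset(item for item, _ in state)
        for item, la in state:
            merged[core].setdefault(item, set()).add(la)
    return [frozenset((item, frozenset(las)) for item, las in items.items())
            for items in merged.values()]

Since transitions depend only on the LR(0) cores, the GOTO entries of merged states stay consistent; what gets lost is the separation of lookahead sets, and that union is exactly where LALR-only conflicts come from.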



So: All three use the same machinery. SLR is "easy" in the sense that you can ignore a tiny bit of the machinery, but it is just not worth the trouble. LR parses a broader set of languages, but the state tables tend to be pretty big. That leaves LALR as the practical choice.



Having said all this, it is worth knowing that GLR parsers can parse any context-free language, using more complicated machinery but exactly the same tables (including the smaller version used by LALR). This means that GLR is strictly more powerful than LR, LALR and SLR; pretty much, if you can write a standard BNF grammar, GLR will parse according to it. The difference in the machinery is that GLR is willing to try multiple parses when there are conflicts between the GOTO table and/or lookahead sets. (How GLR does this efficiently is sheer genius [not mine] but won't fit in this SO post.)



That for me is an enormously useful fact. I build program analyzers and code transformers, and parsers are necessary but "uninteresting"; the interesting work is what you do with the parsed result, so the focus is on the post-parsing work. Using GLR means I can relatively easily build working grammars, compared to hacking a grammar to get it into LALR-usable form. This matters a lot when trying to deal with non-academic languages such as C++ or Fortran, where you literally need thousands of rules to handle the entire language well, and you don't want to spend your life trying to hack the grammar rules to meet the limitations of LALR (or even LR).



As a sort of famous example, C++ is considered to be extremely hard to parse... by guys doing LALR parsing. C++ is straightforward to parse using GLR machinery using pretty much the rules provided in the back of the C++ reference manual. (I have precisely such a parser, and it handles not only vanilla C++, but also a variety of vendor dialects as well. This is only possible in practice because we are using a GLR parser, IMHO).



[EDIT November 2011: We've extended our parser to handle all of C++11. GLR made that a lot easier to do. EDIT Aug 2014: Now handling all of C++17. Nothing broke or got worse, GLR is still the cat's meow.]





AFAIK C++ can't be parsed with LR because it needs infinite lookahead. So I can't think of any hacks that will make it possible to parse it with LR. Also, LRE parsers sound promising.
– ybungalobill
Oct 23 '10 at 12:29





GCC used to parse C++ using Bison == LALR. You can always augment your parser with extra goo to handle the cases (lookahead, is-this-a-typename) that give you heartache. The question is "How painful a hack?" For GCC it was pretty painful, but they made it work. That doesn't mean this is recommended, which is my point about using GLR.
– Ira Baxter
Oct 23 '10 at 14:25





I don't understand how using GLR helps you with C++. If you don't know whether something is a type name or not, then you just don't know how to parse x * y; -- how does using GLR help with that?
– Mehrdad
Oct 7 '12 at 20:15







The point is that the GLR parser will produce both parses (as "ambiguous subtree(s)" in an integrated parse "tree", really a DAG). You can resolve which of the subtrees you want to keep later, by bringing in other context information. Our C++ parser is remarkably simple regarding this issue: it doesn't try to solve the problem. That means we don't have to tangle symbol table construction with parsing, so both our parser and the symbol table construction for C++ are individually clean, and consequently each is much easier to build and maintain.
– Ira Baxter
Oct 7 '12 at 22:07



LALR parsers merge similar states within an LR(1) state machine to produce parser state tables that are exactly the same size as those for the equivalent SLR grammar, and usually an order of magnitude smaller than pure LR parsing tables. However, for LR grammars that are too complex to be LALR, these merged states result in parser conflicts, or produce a parser that does not fully recognize the original LR grammar.



BTW, I mention a few things about this in my MLR(k) parsing table algorithm here.



Addendum



The short answer is that the LALR parsing tables are smaller, but the parser machinery is the same. A given LALR grammar will produce much larger parsing tables if all of the LR states are generated, with a lot of redundant (near-identical) states.



The LALR tables are smaller because the similar (redundant) states are merged together, effectively throwing away context/lookahead info that the separate states encode. The advantage is that you get much smaller parsing tables for the same grammar.



The drawback is that not all LR grammars can be encoded as LALR tables because more complex grammars have more complicated lookaheads, resulting in two or more states instead of a single merged state.



The main difference is that the algorithm to produce LR tables carries more info around between the transitions from state to state while the LALR algorithm does not. So the LALR algorithm cannot tell if a given merged state should really be left as two or more separate states.





+1 I like the Honalee idea. My G/L(AL)R parser generator had the seeds of something like this in it; it produces the minimal LALR machine, and then I was going to split states where there were conflicts, but I never carried through. This looks like a nice way to produce a minimal-size "LR"-like set of parse tables. While it won't help GLR in terms of what it can parse, it may cut the number of parallel parses that GLR has to carry, and that would be useful.
– Ira Baxter
Oct 23 '10 at 2:40




Yet another answer (YAA).



The parsing algorithms for SLR(1), LALR(1) and LR(1) are identical, as Ira Baxter said; however, the parser tables may be different because of the parser-generation algorithm.



An SLR parser generator creates an LR(0) state machine and computes the look-aheads from the grammar (FIRST and FOLLOW sets). This is a simplified approach and may report conflicts that do not really exist in the LR(0) state machine.
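For instance, here is a hedged sketch of that FOLLOW-set computation (the standard fixed-point algorithm; for brevity it assumes a grammar with no epsilon-productions, with rules given as (lhs, rhs_tuple) pairs -- these names are illustrative):

def first_sets(rules, nonterminals):
    FIRST = {A: set() for A in nonterminals}
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            x = rhs[0]                                  # no epsilon-rules assumed
            add = FIRST[x] if x in nonterminals else {x}
            if not add <= FIRST[lhs]:
                FIRST[lhs] |= add
                changed = True
    return FIRST

def follow_sets(rules, nonterminals, start):
    FIRST = first_sets(rules, nonterminals)
    FOLLOW = {A: set() for A in nonterminals}
    FOLLOW[start].add("$")                              # end-of-input follows the start symbol
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            for i, x in enumerate(rhs):
                if x not in nonterminals:
                    continue
                nxt = rhs[i + 1] if i + 1 < len(rhs) else None
                add = (FOLLOW[lhs] if nxt is None       # A -> ... X: FOLLOW(A) flows into FOLLOW(X)
                       else FIRST[nxt] if nxt in nonterminals
                       else {nxt})
                if not add <= FOLLOW[x]:
                    FOLLOW[x] |= add
                    changed = True
    return FOLLOW

Running it on the grammar from the question gives FOLLOW(A) = {a, c}, which is exactly what makes that grammar fail as SLR(1) (see the other answers):

rules = [("S", ("A", "a")), ("S", ("b", "A", "c")),
         ("S", ("d", "c")), ("S", ("b", "d", "a")), ("A", ("d",))]
print(follow_sets(rules, {"S", "A"}, "S"))              # {'S': {'$'}, 'A': {'a', 'c'}}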



An LALR parser generator creates an LR(0) state machine and computes the look-aheads from the LR(0) state machine (via the terminal transitions). This is a correct approach, but occasionally reports conflicts that would not exist in an LR(1) state machine.



A Canonical LR parser generator computes an LR(1) state machine and the look-aheads are already part of the LR(1) state machine. These parser tables can be very large.



A Minimal LR parser generator computes an LR(1) state machine, but merges compatible states during the process, and then computes the look-aheads from the minimal LR(1) state machine. These parser tables are the same size or slightly larger than LALR parser tables, giving the best solution.



LRSTAR 9.1 can generate Minimal LR(1) and LR(*) parsers in C++. See this diagram, which shows the difference among LR parsers.



[Full disclosure: LRSTAR is my product]



Suppose a parser without a lookahead is happily parsing strings for your grammar.



Using your given example, it comes across the string dc. What does it do? Does it reduce it to S, because dc is a valid string produced by this grammar? Or maybe we were trying to parse bdc, because even that is an acceptable string?





As humans we know the answer is simple: we just need to remember whether we have just parsed b or not. But computers are stupid :)





An SLR(1) parser has, over LR(0), only the additional power to perform a lookahead, and here no amount of lookahead can tell us what to do; instead, we need to look back into our past. Thus the canonical LR parser comes to the rescue: it remembers the past context.



The way it remembers this context is that it disciplines itself: whenever it encounters a b, it starts walking down a path towards reading bdc, as one possibility. So when it sees a d, it knows whether it is already walking that path.
Thus a CLR(1) parser can do things an SLR(1) parser cannot!





But now, since we had to define so many paths, the set of states of the machine gets very large!



So we merge similar-looking paths; but, as expected, this could give rise to problems of confusion. However, we are willing to take the risk in exchange for the reduced size.



This is your LALR(1) parser.



Now, how do we do this algorithmically?



When you draw the configuration sets for the above language, you will see a shift-reduce conflict in two states. To remove it you might want to consider SLR(1), which makes decisions by looking at FOLLOW sets, but you will observe that it still cannot decide. Thus you would draw the configuration sets again, but this time with the restriction that whenever you calculate the closure, the additional productions being added must have strict follow(s). Refer to any textbook on what these follows should be.
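A hedged sketch of what "strict follows" means in the closure step -- this is the textbook LR(1) closure, where items carry their own lookaheads instead of whole FOLLOW sets; the item encoding and the first_of_sequence helper (FIRST of a symbol string, no epsilon-rules) are assumptions for illustration:

def closure_lr1(items, rules, nonterminals, first_of_sequence):
    # items: set of ((lhs, rhs_tuple, dot_position), lookahead) pairs
    items = set(items)
    work = list(items)
    while work:
        (lhs, rhs, dot), la = work.pop()
        if dot < len(rhs) and rhs[dot] in nonterminals:
            B, beta = rhs[dot], rhs[dot + 1:]
            # the "strict follow" of the added productions: FIRST(beta la),
            # not the global FOLLOW(B) that SLR(1) would use
            for b in first_of_sequence(beta + (la,)):
                for lhs2, rhs2 in rules:
                    if lhs2 == B:
                        new = ((B, rhs2, 0), b)
                        if new not in items:
                            items.add(new)
                            work.append(new)
    return frozenset(items)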





This is not accurate
– Paul B Mann
Mar 21 '17 at 15:39



The basic difference between the parser tables generated with SLR vs. LR is that reduce actions are based on the FOLLOW set in SLR tables. The FOLLOW set can be overly broad, ultimately causing a shift-reduce conflict.



An LR parser, on the other hand, bases reduce decisions only on the set of terminals which can actually follow the non-terminal being reduced at that point. This set of terminals is often a proper subset of the FOLLOW set of such a non-terminal, and therefore has less chance of conflicting with shift actions.
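In table-construction terms the difference is just which cells receive the reduce action. A hedged sketch (the item encodings are illustrative; a real generator would also check that each cell is empty and report a conflict instead of overwriting):

def add_reduces_slr(ACTION, state_id, items, FOLLOW):
    # items: LR(0) items (lhs, rhs, dot)
    for lhs, rhs, dot in items:
        if dot == len(rhs):                    # completed item: A -> alpha .
            for t in FOLLOW[lhs]:              # reduce on ALL of FOLLOW(A)
                ACTION[(state_id, t)] = ("reduce", lhs, len(rhs))

def add_reduces_lr1(ACTION, state_id, items):
    # items: LR(1) items ((lhs, rhs, dot), lookaheads)
    for (lhs, rhs, dot), lookaheads in items:
        if dot == len(rhs):
            for t in lookaheads:               # reduce only on this item's lookaheads
                ACTION[(state_id, t)] = ("reduce", lhs, len(rhs))

Everything else (shifts, GOTOs, the runtime loop) is the same.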



LR parsers are more powerful for this reason. LR parsing tables can be extremely large, however.



An LALR parser starts with the idea of building an LR parsing table, but combines generated states in a way that results in a significantly smaller table. The downside is that this merging introduces, for some grammars, conflicts that an LR table would have avoided.



LALR parsers are slightly less powerful than LR parsers, but still more powerful than SLR parsers. YACC and other such parser generators tend to use LALR for this reason.



P.S. For brevity, SLR, LALR and LR above really mean SLR(1), LALR(1), and LR(1), so one token lookahead is implied.



SLR parsers recognize a proper subset of grammars recognizable by LALR(1) parsers, which in turn recognize a proper subset of grammars recognizable by LR(1) parsers.



Each of these is constructed as a state machine, with each state representing some set of the grammar's production rules (and position in each) as it's parsing the input.



The Dragon Book example of an LALR(1) grammar that is not SLR is this:


S → L = R | R
L → * R | id
R → L



Here is one of the states for this grammar:


S → L•= R
R → L•



The • indicates the position of the parser in each of the possible productions. It doesn't know which of the productions it's actually in until it reaches the end and tries to reduce.




Here, the parser could either shift an = or reduce R → L.





An SLR parser (one built from the LR(0) machine plus FOLLOW sets) would determine whether it could reduce by checking whether the next input symbol is in the follow set of R (i.e., the set of all terminals in the grammar that can follow R). Since = is also in this set, the SLR parser encounters a shift-reduce conflict.





However, an LALR(1) parser would use the set of all terminals that can follow this particular production of R, which is only $ (i.e., end of input). Thus, no conflict.
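You can check that FOLLOW(R) really does contain = by reusing the follow_sets sketch from an earlier answer (names are illustrative):

rules = [("S", ("L", "=", "R")), ("S", ("R",)),
         ("L", ("*", "R")), ("L", ("id",)), ("R", ("L",))]
print(follow_sets(rules, {"S", "L", "R"}, "S")["R"])    # {'=', '$'}

The = gets into FOLLOW(L) from S → L = R, and flows into FOLLOW(R) through L → * R (R ends that production), so the SLR reduce action for R → L fires on = as well, clashing with the shift.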





As previous commenters have noted, LALR(1) parsers have the same number of states as SLR parsers. A lookahead-propagation algorithm is used to tack lookaheads onto SLR state productions from corresponding LR(1) states. The resulting LALR(1) parser can introduce reduce-reduce conflicts not present in the LR(1) parser, but it cannot introduce shift-reduce conflicts.



In your example, the following LALR(1) state causes a shift-reduce conflict in an SLR implementation:


S → b d•a / $
A → d• / c



The symbols after / are the follow (lookahead) sets for each production in the LALR(1) parser. In SLR, follow(A) includes a, which could also be shifted, hence the conflict.
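A quick hedged check of that state, with the sets written out by hand:

FOLLOW_A       = {"a", "c"}   # A is followed by a in "A a" and by c in "b A c"
shiftable      = {"a"}        # the state can shift a to finish S -> b d a
lalr_lookahead = {"c"}        # the LALR(1) lookahead of A -> d . in this state

assert FOLLOW_A & shiftable              # SLR: overlap -> shift-reduce conflict
assert not (lalr_lookahead & shiftable)  # LALR(1): disjoint -> no conflict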





One simple answer is that all LALR(1) grammars are LR(1) grammars, but not vice versa.
Compared to LALR(1), LR(1) has more states in the associated finite-state machine (more than double the states). And that is the main reason LALR(1) parsers require more code to detect syntax errors than LR(1) parsers.
One more important thing to know is that, for a given grammar, an LR(1) table might have fewer reduce/reduce conflicts, while LALR(1)'s state merging creates more possibility of reduce/reduce conflicts.






