Grammatical Inference

Colin de la Higuera

The problem of inducing, learning or inferring grammars has been studied for decades, but only in recent years has grammatical inference emerged as an independent field with connections to many scientific disciplines, including bio-informatics, computational linguistics and pattern recognition. This book meets the need for a comprehensive and unified summary of the basic techniques and results, suitable for researchers working in these various areas. In Part I, the objects of use for grammatical inference are studied in detail: strings and their topology, automata and grammars, whether probabilistic or not. Part II carefully explores the main questions in the field: What does learning mean? How can we associate complexity theory with learning? In Part III the author describes a number of techniques and algorithms that allow us to learn from text, from an informant, or through interaction with the environment. These concern automata, grammars, rewriting systems, pattern languages or transducers.

Mentioned in questions and answers.

I'm working on analyzing a large public dataset with lots of verbose human-readable strings that were clearly generated by some regular (in the formal language theory sense) grammar.

It's not too hard to look at sets of these strings one by one to see the patterns; unfortunately, there are about 24,000 of these unique strings, broken up into 33 categories and 1714 subcategories, so it's somewhat painful to do this manually.

Basically, I'm looking for an existing algorithm (preferably with an existing reference implementation) to take an arbitrary list of strings and try to infer some minimal (for some reasonable definition of minimal) spanning set of regular expressions that can be used to generate them (i.e. infer a regular grammar from a finite set of strings from the language generated by that grammar).

I've considered doing repeated greedy longest-common-substring elimination, but that only goes so far, because it won't collapse anything but exact matches, so it won't detect, say, a common pattern of varying numerical strings at a particular position in the grammar.
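To make that concrete, here's a rough Python sketch of the greedy elimination I had in mind (the function and token names are just illustrative, not from any library):

```python
def longest_common_substring(strings):
    """Longest substring present in every string (ties broken arbitrarily)."""
    if not strings:
        return ""
    shortest = min(strings, key=len)
    for length in range(len(shortest), 0, -1):      # try longest candidates first
        for start in range(len(shortest) - length + 1):
            candidate = shortest[start:start + length]
            if all(candidate in s for s in strings):
                return candidate
    return ""


def greedy_factor(strings):
    """Repeatedly pull the longest shared substring out into a rule and
    replace it with a placeholder token (the greedy elimination idea)."""
    rules = {}
    while True:
        common = longest_common_substring(strings)
        token = f"<T{len(rules)}>"
        # Stop once factoring no longer shortens the strings; this is also
        # exactly where the approach stalls, since near-matches such as
        # varying digit runs are never collapsed.
        if len(common) <= len(token):
            break
        rules[token] = common
        strings = [s.replace(common, token) for s in strings]
    return strings, rules
```

On strings like "user-0012-active" and "user-0450-active" this factors out the shared literals ("-active", "user-0") but leaves the varying digits alone, which is exactly the limitation described above.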

Brute-forcing anything that doesn't fall out of common-substring elimination is possible, but probably computationally infeasible. (Furthermore, there might be a "phase ordering" and/or "local minimum" issue with substring elimination, since a greedy substring match may end up forcing the final grammar to be less compressed/minimal, even though it looks like the best reduction initially.)

Yes, it turns out this does exist; what is required is what is known academically as a DFA Learning algorithm, examples of which include:

  • Angluin's L*
  • L* (adding counter-examples to columns)
  • Kearns / Vazirani
  • Rivest / Schapire
  • NL*
  • Regular positive negative inference (RPNI)
  • DeLeTe2
  • Biermann & Feldman's algorithm
  • Biermann & Feldman's algorithm (using SAT-solving)

The source for the above list is libalf, an open-source automata learning framework in C++; descriptions of at least some of these algorithms can be found in the textbook above (de la Higuera's Grammatical Inference), among others. There are also implementations of grammatical inference algorithms (including DFA learning) in gitoolbox for MATLAB. A rough sketch of the state-merging idea that the passive learners share is given below.
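Most of these are active-learning (query-based) algorithms; for a fixed list of sample strings, the passive state-merging ones (RPNI, and the k-tails idea behind Biermann & Feldman's algorithm) are the more natural fit. As a rough, illustrative sketch of the skeleton they share (build a prefix tree acceptor from the samples, then merge states that look equivalent), here is some self-contained Python; it is not a faithful implementation of any of the algorithms above, and the names are made up for the example:

```python
def build_prefix_tree(samples):
    """Build a prefix tree acceptor (PTA): one state per distinct prefix of
    the samples, with the states reached by complete samples accepting."""
    transitions = {0: {}}          # state -> {symbol: next_state}
    accepting = set()
    for word in samples:
        state = 0
        for symbol in word:
            if symbol not in transitions[state]:
                new_state = len(transitions)
                transitions[state][symbol] = new_state
                transitions[new_state] = {}
            state = transitions[state][symbol]
        accepting.add(state)
    return transitions, accepting


def k_tails(state, transitions, accepting, k):
    """Suffixes of length <= k that are accepted starting from `state`;
    used as a crude signature for deciding which states to merge."""
    tails = set()
    stack = [(state, "")]
    while stack:
        s, suffix = stack.pop()
        if s in accepting:
            tails.add(suffix)
        if len(suffix) < k:
            for symbol, nxt in transitions[s].items():
                stack.append((nxt, suffix + symbol))
    return frozenset(tails)


def merge_by_k_tails(samples, k=2):
    """Generalize the PTA by merging all states that share a k-tail signature.
    Real learners (e.g. RPNI) are more careful: they merge one pair at a time
    and use negative samples or queries to reject bad merges."""
    transitions, accepting = build_prefix_tree(samples)
    signature = {s: k_tails(s, transitions, accepting, k) for s in transitions}
    canonical, rep = {}, {}
    for s in sorted(transitions):
        canonical.setdefault(signature[s], s)   # first state of each class
        rep[s] = canonical[signature[s]]
    # Rebuild transitions over representatives; the result can be
    # nondeterministic, which a real learner would resolve by further merges.
    merged = {}
    for s, edges in transitions.items():
        merged.setdefault(rep[s], {})
        for symbol, nxt in edges.items():
            merged[rep[s]].setdefault(symbol, set()).add(rep[nxt])
    return merged, {rep[s] for s in accepting}, rep[0]


# Example: from ["ab", "aab", "aaab"] with k=1 the chain of a-states collapses
# into a single looping state, generalizing the three samples to a+b.
automaton, accepting, start = merge_by_k_tails(["ab", "aab", "aaab"], k=1)
```

In practice you would want RPNI-style merging constrained by negative evidence (or at least a held-out set) to keep the automaton from over-generalizing, which is where the library implementations above come in.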

Since this question has come up before and has not been satisfactorily answered in the past, I am in the process of evaluating these algorithms and will update with more information about how useful they are, unless someone with more expertise in the area does so first (which would be preferable).

NOTE: I am accepting my own answer for now but will gladly accept a better one if someone can provide one.

FURTHER NOTE: I've decided to go the route of writing custom code, since a generic algorithm turns out to be a bit of overkill for the data I'm working with. I'm leaving this answer here in case someone else needs it, and will update it if I ever do evaluate these algorithms.