So far, we have described how to take an input string and produce a representation. But, obviously, for most applications the reverse process is also necessary. Equally obviously, how hard this is depends on where you start from. Generating a string from a constituent structure representation like those above is almost trivial. At worst one needs to do something to the words to get the correct form (e.g. to get clean, not cleans, in The user should clean the printer regularly). For the rest, it is simply a matter of `forgetting' what structure there is (and perhaps the not-so-trivial matter of arranging punctuation).
Figure: Building a Representation of Grammatical Relations
Starting from a representation of grammatical relations, or from a semantic representation, is harder.
If the relations between syntactic structures, grammatical relation structures, and semantic structures are described by means of explicit rules, then one approach is to use those rules in the same way as we described for parsing, but `in reverse' --- that is, with the part of the rule written after the arrow interpreted as the lhs. Things are not quite so straightforward when information about grammatical relations and/or semantics is packed into the constituent structure rules.
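The idea of reading a rule in both directions can be sketched as follows. This is a minimal illustration with an invented rule format (the names RULE, parse_step and generate_step are ours, not from any particular system): the rule S -> NP VP is annotated so that NP fills SUBJECT and VP fills HEAD, and the same annotation drives both directions.

```python
# One annotated constituent structure rule: S -> NP VP,
# with SUBJECT = first daughter, HEAD = second daughter.
RULE = {"lhs": "S", "rhs": ["NP", "VP"], "relations": {"SUBJECT": 0, "HEAD": 1}}

def parse_step(daughters):
    """Forward direction (parsing): given the parsed daughters,
    build the grammatical relation structure the rule describes."""
    return {rel: daughters[i] for rel, i in RULE["relations"].items()}

def generate_step(gr):
    """Reverse direction (generation): given a grammatical relation
    structure, recover the daughters in constituent order."""
    daughters = [None] * len(RULE["rhs"])
    for rel, i in RULE["relations"].items():
        daughters[i] = gr[rel]
    return daughters

gr = parse_step(["the user", "cleans the printer"])
print(gr)                 # {'SUBJECT': 'the user', 'HEAD': 'cleans the printer'}
print(generate_step(gr))  # ['the user', 'cleans the printer']
```

The point of the sketch is that no generation-specific rule had to be written: the constituent order comes from the rule's rhs, and the relation labels are merely read from right to left instead of left to right.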
One possibility is to have a completely separate set of procedures for producing sentences from semantic or grammatical relation structures, without going through the constituent structure stage (for example, one would need a rule that puts HEAD, SUBJECT, and OBJECT into the normal word order for English, depending on whether the sentence is active or passive, interrogative or declarative). This has attractions: in particular, it may be that one does not want to generate exactly the sentences one can parse (one may want one's parser to accept stylistically rather bad sentences which one would not want to produce, for example). However, the disadvantage is that one will end up describing again most, if not all, of the knowledge that is contained in the grammar used for parsing.
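Such a direct word-ordering rule might look like the following. This is a hypothetical sketch of our own, not a real system: it places SUBJECT, HEAD (the verb) and OBJECT directly, switching the order for passives and interrogatives, and deliberately ignores morphology (so the verb comes out uninflected).

```python
def order_constituents(gr, mood="declarative", voice="active"):
    """Hypothetical direct-generation rule: put SUBJECT, HEAD and
    OBJECT into the normal English order, with no tree building.
    Morphology (tense, agreement) is not handled in this sketch."""
    if voice == "passive":
        # the OBJECT becomes the surface subject: "the printer was cleaned by ..."
        return [gr["OBJECT"], "was", gr["HEAD"], "by", gr["SUBJECT"]]
    if mood == "interrogative":
        # crude subject-auxiliary inversion: "does the user clean ..."
        return ["does", gr["SUBJECT"], gr["HEAD"], gr["OBJECT"]]
    return [gr["SUBJECT"], gr["HEAD"], gr["OBJECT"]]

gr = {"HEAD": "clean", "SUBJECT": "the user", "OBJECT": "the printer"}
print(" ".join(order_constituents(gr)))  # the user clean the printer
```

Notice how much of this duplicates what a constituent structure grammar already says about English word order --- exactly the redundancy the paragraph above warns against.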
A naive (and utterly impractical) approach would be simply to apply constituent structure rules at random until a structure is produced that matches the grammatical relation structure that is input to generation. A useful variation of this is to start with the whole input structure, take all the rules for the category S (assuming one expects the structure to represent a sentence), and compare the grammatical relation structure each of these rules produces with the input structure. If the structure produced by a particular rule matches the input structure, then one can build a partial tree with this rule, marking each part of the input structure as belonging to the appropriate node of that tree. For example, given the rule for S above, one could take the grammatical relation structure of a sentence like The user has cleaned the printer and begin to make a phrase structure tree, as is illustrated in Figure .
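The matching step can be sketched as follows, again with an invented rule format (each rule for S is a list of daughters paired with the grammatical relation each fills): a rule matches if the input structure supplies every relation the rule names, with HEAD treated specially as described below.

```python
# Grammatical relation structure for "The user has cleaned the printer"
# (aspect and agreement details omitted).
GR = {"HEAD": "clean",
      "SUBJECT": {"HEAD": "user", "DET": "the"},
      "OBJECT":  {"HEAD": "printer", "DET": "the"}}

S_RULES = [
    [("NP", "SUBJECT"), ("VP", "HEAD")],                   # plain S rule
    [("NP", "SUBJECT"), ("VP", "HEAD"), ("S", "SCOMP")],   # needs a sentential complement
]

def matches(rule, gr):
    """A rule matches if the input supplies every non-HEAD relation it
    names; everything not explicitly mentioned stays with the HEAD."""
    return all(rel == "HEAD" or rel in gr for _, rel in rule)

usable = [r for r in S_RULES if matches(r, GR)]
# the first rule matches; the second does not (no SCOMP in the input)
partial_tree = ("S", [(cat, rel) for cat, rel in usable[0]])
print(partial_tree)  # ('S', [('NP', 'SUBJECT'), ('VP', 'HEAD')])
```

The partial tree records, for each daughter node, which part of the input structure it is responsible for realising, which is what the next step exploits.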
Figure: Generation from a Grammatical Relation Structure 1
One can see that a partial constituent structure tree has been created, whose nodes are linked to parts of the grammatical relation structure (a convention is assumed here whereby everything not explicitly mentioned in the rule is associated with the HEAD element). Now all that is necessary is to do the same thing to each part of the grammatical relation structure, attaching the partial trees that have been constructed in the appropriate places. This is illustrated in Figure . Again, many refinements and details are missed out here, but again, all that matters is the basic picture.
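The whole procedure, applied recursively to every part of the structure, can be sketched in a few lines. The rule format and grammar here are toy inventions of our own (and morphology is again ignored, so has cleaned comes out as clean); the HEAD convention from the text is implemented directly: the HEAD daughter inherits everything its sisters do not explicitly claim.

```python
# Toy annotated grammar: for each category, the daughters and the
# grammatical relation each daughter fills.
RULES = {
    "S":  [("NP", "SUBJECT"), ("VP", "HEAD")],
    "VP": [("V", "HEAD"), ("NP", "OBJECT")],
    "NP": [("Det", "DET"), ("N", "HEAD")],
}

def generate(cat, gr):
    """Recursively realise grammatical relation structure `gr` as a
    constituent structure tree of category `cat`."""
    if isinstance(gr, str):          # lexical level: gr is just a word
        return (cat, gr)
    claimed = {rel for _, rel in RULES[cat] if rel != "HEAD"}
    daughters = []
    for dcat, rel in RULES[cat]:
        if rel == "HEAD":
            # HEAD convention: keep everything the sisters did not claim
            sub = {k: v for k, v in gr.items() if k not in claimed}
            if set(sub) == {"HEAD"}:  # nothing left but the head word
                sub = sub["HEAD"]
        else:
            sub = gr[rel]
        daughters.append(generate(dcat, sub))
    return (cat, daughters)

gr = {"HEAD": "clean",
      "SUBJECT": {"HEAD": "user", "DET": "the"},
      "OBJECT":  {"HEAD": "printer", "DET": "the"}}
print(generate("S", gr))
```

Running this yields the tree ('S', [('NP', [('Det', 'the'), ('N', 'user')]), ('VP', [('V', 'clean'), ('NP', [('Det', 'the'), ('N', 'printer')])])]), i.e. each partial tree has been attached in the appropriate place, just as the figure shows.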
Figure: Generation from a Grammatical Relation Structure 2