The last section indicated that tools were available to generate all of the modules of Figure 1. It also pointed out that these modules depend upon one another in various ways. For example, the structural analysis module generated from a regular expression and a context-free grammar must invoke the functions exported by the tree construction module generated from an attribute grammar. In effect, there is a large amount of ``glue'' holding the modules of Figure 1 together.
If we treat the construction of the compiler as a single problem, rather than as a collection of module generation problems, then the nature of the interactions among the modules can often be deduced from their specifications. For example, the context-free grammar for C contains a rule it_statement: 'while' '(' exp ')' statement, and it is precisely at the reduction of this rule that the generated parser should invoke MkWhile. If we were using YACC to generate the parser module, we would write a rule something like this in the context-free grammar:
it_statement: WHILE '(' exp ')' statement {$$ = MkWhile($3,$5);}
A tool that had access to both the context-free grammar and the attribute grammar could easily deduce the action {$$ = MkWhile($3,$5);} by comparing the signatures of the two rules, and hence absolve the user of the need to add that action to the grammar.
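To make the deduction concrete, here is a small C sketch of the idea. It is hypothetical rather than Eli's actual algorithm; the Rule structure, the lower-case convention for nonterminals, and the functions tree_valued and emit_action are invented for this illustration. For brevity the sketch identifies the tree-valued symbols by a naming convention instead of consulting the attribute grammar rule's signature, as a real planner would.

/* Hypothetical sketch, not Eli's actual algorithm: given a CFG rule and
 * the name of the tree-construction function that the attribute grammar
 * associates with it, synthesize the YACC action by applying that function
 * to the positional attributes of the tree-valued right-hand-side symbols. */
#include <ctype.h>
#include <stdio.h>

#define MAXSYM 16

struct Rule {
    const char *lhs;               /* left-hand-side nonterminal */
    const char *rhs[MAXSYM];       /* right-hand-side symbols */
    int nrhs;
    const char *constructor;       /* tree constructor, e.g. "MkWhile" */
};

/* Treat a symbol as tree-valued if it is a nonterminal (lower case here). */
static int tree_valued(const char *sym)
{
    return islower((unsigned char)sym[0]);
}

/* Print the action that the user would otherwise have to write by hand. */
static void emit_action(const struct Rule *r)
{
    printf("{$$ = %s(", r->constructor);
    int first = 1;
    for (int i = 0; i < r->nrhs; i++) {
        if (!tree_valued(r->rhs[i]))
            continue;              /* tokens carry no tree value */
        printf("%s$%d", first ? "" : ",", i + 1);
        first = 0;
    }
    printf(");}\n");
}

int main(void)
{
    struct Rule while_rule = {
        "it_statement",
        { "WHILE", "'('", "exp", "')'", "statement" }, 5,
        "MkWhile"
    };
    emit_action(&while_rule);      /* prints {$$ = MkWhile($3,$5);} */
    return 0;
}

Run on the while rule, the sketch prints exactly the action shown above; a tool with access to both specifications could generate such actions for every rule of the grammar.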
Another example of the advantage of treating the problem as a whole comes from the observation that no computations need be carried out at many of the nodes of the tree. Suppose that the user simply did not write rules for the nodes at which no computation is required. The resulting attribute grammar would be a collection of unrelated fragments, and LIGA would reject it. A tool with access to both the context-free grammar and the attribute grammar could supply the missing rules by comparing the two, as the sketch below illustrates. In a large grammar this represents a considerable saving for the user.
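The following sketch, again hypothetical and not Eli's implementation, shows the bookkeeping involved: every context-free rule for which the user wrote no computation receives a skeleton attribute-grammar rule with an empty computation part. The rule names, the tiny grammar, and the printed syntax are invented; they merely suggest the shape of the missing fragments, not exact LIGA input.

/* Hypothetical sketch, not Eli's implementation: supply a skeleton
 * attribute grammar rule, with no computations, for every context-free
 * rule the user did not mention. */
#include <stdio.h>
#include <string.h>

struct CfgRule {
    const char *name;     /* name shared by CFG and attribute grammar */
    const char *text;     /* the production itself */
};

int main(void)
{
    /* All rules of a tiny, invented context-free grammar. */
    static const struct CfgRule cfg[] = {
        { "rWhile",    "it_statement ::= 'while' '(' exp ')' statement" },
        { "rExprStmt", "statement ::= exp ';'" },
        { "rParen",    "exp ::= '(' exp ')'" },
    };
    /* Rules for which the user actually wrote computations. */
    static const char *written[] = { "rWhile" };

    for (size_t i = 0; i < sizeof cfg / sizeof cfg[0]; i++) {
        int found = 0;
        for (size_t j = 0; j < sizeof written / sizeof written[0]; j++)
            if (strcmp(cfg[i].name, written[j]) == 0) {
                found = 1;
                break;
            }
        if (!found)               /* emit the rule the user omitted */
            printf("RULE %s: %s END;\n", cfg[i].name, cfg[i].text);
    }
    return 0;
}

For a grammar the size of C's, most rules need no computation, which is exactly where the saving claimed above comes from.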
Domain-specific programming environments are systems that embody an understanding of certain classes of problems. They reduce the burden on the programmer by taking over standard tasks needed to solve problems in those classes. The programmer is then free to concentrate on the aspects of the problem that distinguish one instance from another, rather than having to deal with those that are common.
Eli is a domain-specific programming environment that embodies an understanding of the class of problems that involve analyzing text and then translating or interpreting it [GLH+92]. Those problems are solved by processors having the structure shown in Figure 1. Thus Eli contains tools that accept declarative specifications and generate modules to implement Figure 1. It also contains tools that analyze the relationships among those specifications and provide the ``glue'' to hold the modules together. Planner tools decide when certain of the blocks of Figure 1 are unnecessary, and change the manufacturing process accordingly.
An Eli user is unaware of the individual tools and their interactions. The user requests a product, such as an executable version of the compiler, and provides a set of specifications (which may be a mixture of declarative and operational). Eli examines the kinds of specifications, determines how to manufacture the desired product, and then does so. It maintains a cache of intermediate and final products so that the cost of manufacture after a change in specifications or requested product is minimized.