

New Features of Eli Version 4.3


Lexical analysis

There have been several additions involving auxiliary scanners and token processors: a new auxiliary scanner for reporting token errors, a header file defining the built-in auxiliary scanners and token processors, and a consolidation of NUL character processing.

Detecting lexical errors explicitly

Normally the scanner reports a lexical error when an input character cannot be the first character of any basic symbol. In other words, an error is signalled when the processor knows nothing about an input character. Sometimes, however, it is appropriate to recognize a specific sequence of input characters as an invalid token.

A new auxiliary scanner called lexerr handles this situation. It reports that the scanned character sequence is not a token. It does not alter the initial classification, and does not compute a value. There is no source file for this token processor; it is a component of the scanner itself, but its interface is exported so that it can be used by other modules.
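As a sketch of how this might look in a lexical specification, the line below classifies a digit string immediately followed by letters (as in `123abc') as an invalid token. The pattern itself is purely illustrative; the point is only the use of lexerr as the auxiliary scanner:

```
$[0-9]+[a-zA-Z_]+    (lexerr)
```

Because lexerr does not alter the initial classification or compute a value, the matched sequence is simply reported as invalid and scanning continues after it.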

Scanning to, but not including, a newline

The auxiliary scanner auxNoEOL extends the character sequence matched by the associated pattern to the end of the current line, but does not include the terminating newline. It is useful in situations where a token must begin at the beginning of a line, and therefore has a regular expression whose first character is the newline. A preceding token using auxEOL to extend to the end of the line would absorb the newline, making it impossible to recognize the token that must begin at the start of the next line.
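As an illustrative sketch (the `--' comment convention and the mkdirective token processor are assumed here, not taken from any particular language), a specification might pair a line comment using auxNoEOL with a directive token whose pattern begins with a newline:

```
$"--"          (auxNoEOL)
$\n%[a-zA-Z]+  [mkdirective]
```

Because the comment leaves the newline in the input, a directive at the start of the next line can still match its leading newline.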

Auxiliary scanner and token processor definitions

The header file `$elipkg/Scan/ScanProc.h', containing definitions of all of the auxiliary scanners available in the library, has been added. It should be included by any C program that uses auxiliary scanners from the library.
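A minimal sketch of such a use follows, assuming the auxiliary scanner interface suggested by the auxNUL call shown later in this chapter (a pointer to the scanned text and its current length, returning the continuation point). The scanner name myScan and the delegation shown are hypothetical:

```c
#include "ScanProc.h"  /* declares the library's auxiliary scanners */

/* Hypothetical wrapper: report the scanned sequence as invalid by
 * delegating to the built-in lexerr, then continue scanning. */
char *
myScan(char *start, int length)
{
  return lexerr(start, length);
}
```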

Processing NUL characters during lexical analysis

All of the auxiliary scanners that scan over a newline now invoke auxNUL when they detect an ASCII NUL just beyond that newline. An ASCII NUL just beyond a newline character signals the end of the current source buffer, and an operation is needed to refill the buffer. By invoking auxNUL whenever this condition arises, we have centralized the operation of refilling the buffer at one point. This means that if a specification requires some special action whenever the buffer is refilled, it can override auxNUL.

We strongly recommend that users adhere to this convention when they write an auxiliary scanner that must scan over a newline. Here is a typical code sequence for such a scanner. The variable p is the scan pointer and start points to the beginning of the current token:

if (*p == '\0') {
  int current = p - start;
  TokenStart = start = auxNUL(start, current);
  p = start + current;
  StartLine = p - 1;
  if (*p == '\0') {
    /* Code to deal appropriately with end-of-file.
     * Some of the possibilities are:
     *   1. Output an error report and return p
     *   2. Simply return p
     *   3. Move to another file and continue
     */
  }
}
