Introduction to Autolexical Grammar

by Eric Schiller, Linguistics Unlimited

This page was last modified on 05/07/02.

© by Eric Schiller. All Rights Reserved. 

This document may be freely distributed, if unaltered and complete, in HTML format. 

The preferred reference for this document is: 

Schiller, Eric (1997-99). Introduction to Autolexical Grammar. Published on the Internet by Linguistics Unlimited, Moss Beach CA.

Correspondence regarding this document may be sent to the author at: 

Linguistics Unlimited mailto:linguist@chessworks.com

Mail: Post Office Box 1048, El Granada CA 94018-1048 

Important notice: In late 1999 or early 2000 this document will be replaced by an XML version which will require an XML enabled browser. Recent versions of Microsoft and Netscape clients will work.

Contents

bullet Revision History
bullet Conventions
bullet General Introduction
bullet What is Autolexical Syntax?
bullet History of the theory
bullet Geographical Survey of the Dimensions
bullet Hierarchies, Categories and Features
bullet Lexicon
bullet Interface
bullet Dimensions
bullet Syntax (Introduction)
bullet Syntax (Details)
bullet Logico-Semantics (Introduction)
bullet Logico-Semantics (Details)
bullet Morphosyntax (Introduction)
bullet Morphosyntax (Details)
bullet Morphophonology (Introduction)
bullet Morphophonology (Details)
bullet Discourse (Introduction)
bullet Discourse (Details)



bullet Objects and Methods for Autolexical Analysis
bullet References
bullet Credits

Revision History

Version 1.0 12-Feb-96 

Version 2.0 17-Jun-96 

Version 2.5 17-May-97 

Version 2.6 21-May-97 (Mostly cosmetic changes) 

Version 3.0 09-Jun-97 (Major revision to object structures and lexicon; these are now standalone documents, linked from where each section used to be.) 

Version 3.5 1-Sep-97 (Additional examples added). 

Version 3.7 29-Jan-98 (Mostly cosmetic changes)

Version 4.0  4-Oct-98 (Additional navigation, revision in detailed logico-semantics.)

Version 4.1 1-Feb-99 (Major changes to terminology on features and  hierarchies. See the new Attributes and Values. References to the XML Implementation.)

Version 4.5 1-Aug-99. (Just a bit of cleanup.)

Version 5.0 1-Oct-99. (Fixed navigational problems. Illustrative graphics added.)

Conventions

All linguistic examples are in italics.

References to published work are in Author (Date) format. Further information can be found in the References section. 

Introduction

The purpose of this document is to provide general background to the theoretical framework known as Autolexical Grammar (formerly Autolexical Syntax). The material covers the development of the theory from 1983 through its object-oriented implementation of January 1996 to its current form, specifically the Autolexical Grammar Application Environment (ALGAE). Some of the material below is rather technical, and those without a solid background in linguistics may prefer to get the flavor of the theory by examining the ongoing Mythbusters series, where the framework is applied to problems in the teaching and understanding of English grammar. 

Each section is a sort of position paper on various issues in linguistic theory, together with the practical consequences entailed by adopting these positions. The reader will, I hope, obtain not only a clear picture of how the framework functions, but will also acquire all the tools necessary to conduct analysis of linguistic data in all of its variety and splendor. The opinions expressed herein are strictly my own, and to what extent they are shared among the majority of those who do autolexical analysis I cannot say. That there are local and regional "dialects" of the framework should not be surprising. Many interesting paths are still being explored. But I do not want the reader sidetracked by these variations, since one of the two primary goals of this project is to provide functioning tools. 

This document does not take a conservative approach. Some of the most interesting recent developments are radical in nature, including the revised X-bar system proposed in Schiller & Need 1992 and the rejection of generalized quantifiers, a development not yet documented in print. If the radical ideas prove helpful theoretically, descriptively, or both, they will be incorporated here, with appropriate warning signs posted. 

Many of the examples presented here will be fanciful or humorous. Since the entire Generative Semantics movement of the 1960s-70s, with a major node at the University of Chicago, has been criticized from time to time for this flippant attitude, let me stress that this is not a mere stylistic affectation. It is simply much easier to remember examples such as "Spiro conjectures Exlax" or "My toothbrush is pregnant" (both from the late, great Jim McCawley) than it is to remember more mundane sentences. Perhaps the clear prejudice toward certain political parties and philosophies is less justifiable, but the advantage of electronic transmission and global search and replace routines is that the reader can always strike the offending material. I am not being paid to write this and I reserve the right to make the experience pleasurable! 

This informality will also pervade references to individuals throughout the document. James D. McCawley is cited as Jim McCawley more often than not, especially by American linguists. And from time to time I refer to my friend, advisor, and former teacher Jerrold M. Sadock as Jerry Sadock, even in print. I have adopted a policy of using the full academic name of each scholar in the reference list, but throughout the text I have tended to use the name most commonly used by linguists when attending conferences or holding discussions in a bar. Another University of Chicago affectation for which I will surely be punished in the afterlife. 

Finally, this document is intended for anyone who is sufficiently interested in linguistic theory to have acquired some background, at least the equivalent of undergraduate introductory material and an undergraduate syntax course. Those who are inclined to skip over the more fundamental material might want to keep in mind that some of the most obvious principles and procedures mentioned are not necessarily implemented in even the most sophisticated of frameworks. 

What is Autolexical Grammar?

Once in a while you get shown the light in the strangest of places if you look at it right. --Robert Hunter.

Autolexical Grammar is a variety of non-derivational, non-transformational generative grammar in which fully autonomous systems of rules characterize various dimensions of linguistic structure. This statement covers a lot of ground, so let's break it down. 

The name of the theory is Autolexical Grammar. The 'auto' reflects the fact that this framework employs a set of autonomous components. The theory relies on information stored in a lexicon, which contains words (and other items such as affixes, constructions and idioms) together with the properties of those words. A lexical entry can be described in contemporary computer jargon as an object in the object-oriented paradigm. Grammar is used in the title in the generic sense, referring to a set of rules which govern how units are combined into larger units. Each of the autolexical components (for example syntax, morphology, semantics) has its own set of rules, its own syntax. (I will capitalize Autolexical and Grammar only when referring to the theory as a proper noun, and when specifically referring to the particular form of the framework as used by me or at the University of Chicago. There are many scholars employing autolexical ideas in their work. The lowercase "autolexical" is meant to have wide scope over a variety of implementations.) 
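Since the ALGAE implementation itself is not reproduced in this document, the object-oriented view of a lexical entry can only be sketched. The following Python fragment is a minimal illustration of the idea; the class, its slot names, and the sample entry are my own illustrative assumptions, not ALGAE internals.

```python
# A minimal sketch of a lexical entry as an object: one bundle of
# properties per autonomous dimension. All names here are illustrative.
class LexicalEntry:
    def __init__(self, form, **dimensions):
        self.form = form              # a word, affix, idiom or construction
        self.dimensions = dimensions  # per-dimension property bundles

    def spec(self, dimension):
        """Return this item's properties on one dimension (None if none)."""
        return self.dimensions.get(dimension)

# 'eat': just a verb in the syntax, a two-place predicate semantically.
eat = LexicalEntry(
    "eat",
    syntax={"category": "V"},
    logico_semantics={"type": "property", "args": 2},
)
```

An item with no function on some dimension (like the interjection uh in the syntax) simply lacks an entry for that slot.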

The framework is non-derivational, in that each component, usually called a dimension, is static, with no processes applying to create new forms from underlying forms. Language is viewed as the intersection of a set of independent representations. There are no rewrite rules. Instead, each component consists of a structural description of the information relevant to that dimension. We will have much more to say on this later. There are no movement rules, and thus no transformations. Passive, therefore, is not a derivational transformation in which items are manipulated. Instead, it is simply a generalization over the grammar of some languages that there are pairs of sentences which are truth-conditionally equivalent (Sadock 1992). But as we shall see, an autolexical approach can provide some insight as to why passive sentences exist, a topic rarely treated in traditional transformational grammar. This will be taken up in our discussion of the discourse dimension. 

Autolexical Syntax is a generative approach to grammar, with an object-oriented framework. The framework provides a system of objects and methods which generates an infinite number of strings of words which constitute the expressions of the language. In order to determine which strings of words are well-formed expressions, grammaticality judgments are employed. Thus an autolexical grammar of English would be expected to generate the string Clinton does not respect Newt but not *Not does respect Clinton Newt. 
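The idea of a rule system generating the well-formed strings can be illustrated with a toy context-free grammar in Python. The grammar below is invented purely for this example and is vastly simpler than any real autolexical component; a depth bound stands in for what is in principle an infinite set of strings.

```python
from itertools import product

# A toy grammar, invented for illustration only.
RULES = {
    "S":   [["NP", "VP"]],
    "VP":  [["V", "NP"], ["Aux", "Neg", "VP"]],
    "NP":  [["Clinton"], ["Newt"]],
    "V":   [["respect"], ["respects"]],
    "Aux": [["does"]],
    "Neg": [["not"]],
}

def generate(symbol, depth=6):
    """Yield all terminal strings derivable from symbol within depth."""
    if symbol not in RULES:        # a terminal word
        yield [symbol]
        return
    if depth == 0:
        return
    for expansion in RULES[symbol]:
        for parts in product(*(list(generate(s, depth - 1)) for s in expansion)):
            yield [w for part in parts for w in part]

LANGUAGE = {" ".join(s) for s in generate("S")}
```

The grammatical example string is among the generated expressions; the scrambled string is not.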

It seems at times that some linguists place a great deal of importance on whether an expression is grammatical or not, to the point that objections by native speakers whose judgment differs are either ignored or considered simply wrong. A more reasonable approach is to accept the fact that if a number of speakers agree on a judgment, then there exists a dialect which includes (or excludes) the expression in question. Therefore if one wants a grammar to be descriptively adequate it is necessary to ensure that the framework permits an analysis of each of the alternatives. In some approaches, this is implemented via parameters which apply to individual languages or dialects, though often this seems to be more a way of marking exceptions to rules than an explanation of how languages systematically differ. The autolexical approach does not directly instantiate parametric variation, though parameters can be developed to group together clusters of properties which pattern together in the languages of the world. However, most autolexicalists feel that too little accurate descriptive work has been done to date, and that parametric analysis is probably premature until the properties of the thousands of attested languages are described in more detail. 

In recent work, the binary view of grammaticality has been replaced, following conventions developed by Ross, with sliding scales of judgment, in which a set of strings can be placed in a hierarchical order of acceptability. Full grammaticality and ungrammaticality have become simply terminal nodes on this scale. 

Indeed, autolexical analysis is still a developing enterprise. The number and nature of the dimensions in the framework is still under investigation, and may be for some time. On the one hand, one does not want to reduce linguistic phenomena in an ad hoc manner, but on the other hand a very large set of components will prove unwieldy. A dimension is a perspective on a linguistic phenomenon. It is a way of looking at the phenomenon from a particular point of view. At first, it was assumed that there were three dimensions: syntax, logical semantics and morphology. It soon became clear that these would not suffice to describe all of the phenomena which are encoded in natural language by grammatical means. If any language has a specific formal device which affects the interpretation of an utterance, we want our theory to be able to describe and explain it. So at the very least, some sort of discourse component is needed, as is some sort of link to phonological information. Some languages use specific formal devices to mark illocutionary force, and these must be handled as well. Throughout this document I will be adopting the 5-dimension framework which I developed early in 1996. 

Each dimension exists on its own. The syntactic dimension does not concern itself with morphology or semantics. It is merely a set of rules which determine whether an expression of the language is well-formed in terms of the constituent structure of its syntactic formatives. The logico-semantic component doesn't care about the syntactic structure. It has its own representation which determines whether an expression is logically well-formed. Transitivity in English can be viewed as purely a matter of the logico-semantics. A verb that requires a subject and an object demands that these be provided in the semantic component (throughout the discussion logico-semantic and semantic will be used interchangeably, but the reference is always to the former. Questions of real-world meaning are largely extra-grammatical, if no less important, and are part of the discourse dimension.) If one of these items is missing, the sentence is either ungrammatical, or the missing argument may be supplied by a rule. In English, eat is a transitive verb. But we can say Uncle Fenster ate piggishly and it is still grammatical. The syntax of eat is simply that it is a verb. We will see below how this separation of powers allows us to easily account for the behavior of such words as seem, which has the syntax of a verb but the semantics of an adverb. 
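The separation of powers described above can be sketched as two independent well-formedness checks, one per dimension. The lexicon contents and the rule supplying an unexpressed object are my own illustrative assumptions, not a fragment of any published autolexical grammar.

```python
# Sketch: the syntax only sees categories; the logico-semantics only
# sees argument demands. Lexicon contents are illustrative assumptions.
LEXICON = {
    "eat": {"syntax": {"category": "V"},
            "semantics": {"args": ("agent", "patient")}},
}

def syntax_ok(verb):
    # The syntax of 'eat' is simply that it is a verb.
    return LEXICON[verb]["syntax"]["category"] == "V"

def semantics_ok(verb, supplied):
    """All required arguments must be present overtly or supplied by rule."""
    required = set(LEXICON[verb]["semantics"]["args"])
    missing = required - set(supplied)
    # A rule may supply an unexpressed patient ('Uncle Fenster ate
    # piggishly' = 'ate something piggishly'); a missing agent cannot
    # be rescued this way in this sketch.
    return missing <= {"patient"}
```

Each check runs without consulting the other dimension, which is the point of the autonomy.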

The dimensions are coordinated by means of the lexicon and an interface principle (the Generalized Interface Principle, Sadock & Schiller 1993) which limits the degree of structural discrepancy between the autonomous representations given by the various modular grammars. Each component contains just a single representation, so there is neither movement nor deletion in autolexical syntax. Elements of each dimension are "exposed" to the interface. The effects of movement are treated as discrepancies of order or constituency between two autonomous representations, and the effects of deletion are achieved by allowing an element to be present in one dimension with no counterpart on some other dimension. Such discrepancies are discussed in virtually every paper written in the framework, and we will provide a number of examples later on. 
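A rough sketch of what the interface looks for, under the simplifying assumption that each dimension's representation is flattened to a sequence of lexeme labels (real autolexical representations are of course structured trees, not flat lists):

```python
# An element present in one dimension with no counterpart in the other
# models the effect of deletion; differing relative order of the shared
# elements models the effect of movement. A purely illustrative sketch.
def discrepancies(dim_a, dim_b):
    only_a = [x for x in dim_a if x not in dim_b]
    only_b = [x for x in dim_b if x not in dim_a]
    shared_a = [x for x in dim_a if x in dim_b]
    shared_b = [x for x in dim_b if x in dim_a]
    return {
        "unmatched": only_a + only_b,        # deletion-like mismatches
        "order_mismatch": shared_a != shared_b,  # movement-like mismatches
    }
```

Whether a given discrepancy is tolerable is then a matter for the Generalized Interface Principle discussed below.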

Autolexical Syntax differs from monostratal theories such as Generalized Phrase Structure Grammar, Head-driven Phrase Structure Grammar, Montague Grammar, and Categorial Grammar in recognizing more than one level of structure, and is similar in this respect to transformational grammar. In the standard theory of transformational grammar, however, the only level that was directly characterized was deep structure, whereas each level of representation is directly characterized by an explicit grammar in Autolexical theory. Most work in the Government and Binding framework and its descendants has assumed that all levels are topologically identical, a view codified in the Projection Principle (prior to its recent demise). By contrast, in Autolexical Syntax every level has its own system of tactics, and in this respect the approach is similar to that of Stratificational Grammar. 

A brief history

The conceptual basis for an autolexical approach was first sketched out on the back of a napkin at a pizza parlor in College Park, Maryland in the summer of 1992. The framework for Autolexical Syntax grew out of work on the modular nature of grammar by Jerry Sadock (1983) and was first introduced at a talk given at the University of Chicago in May, 1984. The first major publication to apply the approach was an article in Language by Sadock (1985), which was concerned with the interaction of morphology and syntax in West Greenlandic. The next few years saw presentations by Steven G. Lapointe (1987), Anthony Woodbury (1987) and a number of other scholars. The first doctoral dissertation involving the framework was "An Autolexical Account of Subordinating Serial Verb Constructions" by Eric Schiller (1991), followed by Randolph Graczyk (1991) and Jeff Leer (1991). It is also employed in dissertations in progress by Tista Bagchi, David Kathman, Barbara Need and Robinson Schneider. 

This early work is documented in Schiller, Steinberg & Need (1995). The next major developments were a refinement of the syntactic component (Schiller & Need 1992), and a reduction of interface principles to a single unifying principle, the Generalized Interface Principle (Sadock & Schiller 1993). 

By 1997, my own version of the framework (ALGAE) had taken on an object-oriented structure, with a reduction of the number of dimensions from 7 to 5. The number of hierarchies represented within the dimensions has grown to accommodate all of the linguistic phenomena we have tried to deal with. 

Geographical Survey

The terrain of Autolexical Syntax is quite vast, as the theory is intended to cover all of the information which is grammaticized in language, from phonological form through discourse properties of lexical items. 

There are three broad types of information which form the framework of Autolexical Syntax. The hierarchies are ordered sets of the values of linguistic features. They are organized into dimensions largely for convenience. The interface is the intersection of the dimensions, where representations are compared, and the lexicon contains the information about words and other linguistic elements. Hierarchies pervade all the dimensions and play an important role in the interface. 

Hierarchies, Categories and Features

In earlier versions of autolexical grammar the basic elements of the framework were categories and rules which operated on those categories, supplemented by features which were checked at the interface. The introduction of hierarchies into the model changes the view of the atomic elements of the theory. We can now distinguish between categories, which are unordered sets of atomic elements called attributes, and hierarchies, which are ordered sets. Most of what we have heretofore described as features are now attributes which participate in specific hierarchies. So instead of treating case relations as semantic features, as I proposed in early, unpublished autolexical work, we will now have a separate hierarchy of case attributes, which can interact with a number of dimensions. The Paninian view of case is therefore not only retained, but more easily implemented. Cross-linguistic tendencies involving case (such as the tendency for nominative case to be morphologically unmarked) can be expressed by a case-marking hierarchy in the morphosyntax, while the thematic relations expressed by the cases are relevant to the logico-semantics. Binary features are now simply hierarchies with just two ordered members. 
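The distinction between categories (unordered) and hierarchies (ordered) maps naturally onto sets and sequences. The particular case ordering below is an illustrative assumption for the sake of the sketch, not a claim about any specific language:

```python
# A category is an unordered set of attributes.
category_case = frozenset({"nominative", "accusative", "dative", "ergative"})

# A hierarchy is an ordered set of the same attributes; here, a
# case-marking hierarchy (leftmost tends to be morphologically
# unmarked). The ordering shown is an illustrative assumption.
case_markedness = ("nominative", "accusative", "dative", "ergative")

# A binary "feature" is just a two-member hierarchy.
voice_binary = ("active", "passive")

def outranks(hierarchy, a, b):
    """True if attribute a precedes attribute b on an ordered hierarchy."""
    return hierarchy.index(a) < hierarchy.index(b)
```

The same attribute can sit in an unordered category and in several ordered hierarchies at once, which is what lets case interact with more than one dimension.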

It is important to note that hierarchies, like categorial sets, must contain attributes of the same type. If the elements of a hierarchy are units of a particular dimension, the hierarchy must contain only elements of that dimension. It would be inappropriate to invoke the notion of subject in the morphosyntax, for example, because there is nothing which corresponds to a notional subject there. 

Employing a set of conflicting hierarchies, rather than binary parametric settings, allows us to more easily capture generalizations about language. For example, consider the following data on fundamental word order taken from Russell Tomlin's Basic Word Order: Functional Principles (Croom Helm 1986): 


Distribution of fundamental constituent order (%)

SOV     SVO     VSO     VOS     OVS     OSV
44.78   41.79    9.20    2.99    1.24    0.00

It is clear that placing the subject before the object+verb (in either order) is the preferred order. But we would not want to postulate an ad-hoc hierarchy such as (SOV | SVO) > VSO > (VOS | OVS) > OSV because this would be uninformative. The first question then is on which dimension to locate the hierarchy. If we assume that every language has a constituent VP (although this is a controversial position in some frameworks, it is correct here because Syntax is concerned primarily with dominance relations, while Surfotax handles much, and perhaps all, of linear precedence and can involve mismatches with the Syntax) we can reformulate the hierarchy in the Syntax as: 

An NP which c-commands another NP also precedes it. 

This hierarchical statement can be applied within the Syntax or as an interface constraint on Syntax/Surfotax pairings. The existence of exceptions, usually fatal to a linguistic analysis, can be explained by the actions of other ordering principles which override that rule, not only in the case of languages where the VP precedes the NP, rare though they are, but even in languages which otherwise obey it. For example, topicalization of direct objects, a frequent phenomenon, is motivated by the new information/old information hierarchy, and this discourse factor is more powerful than the purely syntactic/surfotactic constraints. 
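The c-command/precedence statement can be checked mechanically on toy constituent trees. The tree encoding and the checker below are my own sketch, assuming trees as nested tuples of the form (label, children...) with leaves as (label, word):

```python
# Two toy trees: subject-first (SVO) and VP-initial (VOS-style).
SVO = ("S", ("NP", "Clinton"), ("VP", ("V", "respects"), ("NP", "Newt")))
VOS = ("S", ("VP", ("V", "respects"), ("NP", "Newt")), ("NP", "Clinton"))

def is_leaf(t):
    return len(t) == 2 and isinstance(t[1], str)

def subtrees(t):
    yield t
    if not is_leaf(t):
        for child in t[1:]:
            yield from subtrees(child)

def yield_of(t):
    if is_leaf(t):
        return [t[1]]
    return [w for c in t[1:] for w in yield_of(c)]

def np_precedence_holds(tree):
    """Check: an NP which c-commands another NP also precedes it."""
    sentence = yield_of(tree)

    def pos(sub):                       # position of a subtree's first word
        return sentence.index(yield_of(sub)[0])

    for node in subtrees(tree):
        if is_leaf(node):
            continue
        for a in node[1:]:              # a c-commands everything in its sisters
            if a[0] != "NP":
                continue
            for sister in node[1:]:
                if sister is a:
                    continue
                for b in subtrees(sister):
                    if b[0] == "NP" and pos(a) > pos(b):
                        return False
    return True
```

The subject-first tree satisfies the rule; the VP-initial tree violates it, and on this view would need some overriding principle (discourse, surfotactic) to be licensed.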

Does this mean that in Autolexical Grammar everything is permitted, with a jungle of hierarchies to navigate, any one of which can be pulled out and forced into use to support an otherwise ad-hoc analysis? Not really. Once we have been able to elucidate the various hierarchies and tie them to the appropriate dimensions, we shouldn't be able to get away with disingenuous solutions. Thus the investigation of these elements is a necessary precursor to full-fledged linguistic analyses. The last thing we need is a Rube Goldberg machine like the old Government and Binding paradigm, where every tweak to fix one analysis sent hundreds of others into oblivion. Our investigations are only beginning, using analytical tools honed in centuries of linguistic research applied now to a new framework which is capable of incorporating them. 

Lexicon (Overview)

Autolexical Syntax is a lexicalist approach. The lexical entry, which can be a word, an affix, an idiom or a construction, is the fundamental building block of the framework. In the present implementation, this is an object which has properties that specify the behavior of each lexical item (word, affix, idiom etc.) on each dimension. Details are given in a separate document. 

Interface (Overview)

The interface compares the representations of the individual dimensions and checks them for discrepancies in linear order or constituent structure. If such discrepancies are found, the expression is cleared as grammatical only if the violations are within tolerable limits. 

The limits of the discrepancy are set by the Generalized Interface Principle (Sadock & Schiller 1993): 

Paradigmatic and syntagmatic features at all levels should correspond as closely as possible.

The "as possible" part of the statement refers to the fact that in many cases the lexical requirements of a lexeme make it impossible to match perfectly. Clitics are good examples of such items, as are infixes, which automatically generate a mismatch between the morphophonological form and the morphosyntactic form. 

Dimensions (Overview)

A dimension is a particular perspective on the organization of information contained in a linguistic expression. Linguistic expressions can be analyzed structurally in a number of ways. Linguists often choose to concentrate on a single structure, e.g. phonological form. In the autolexical approach it is necessary to consider each expression from a variety of perspectives, and also to consider combinations of perspectives. Sometimes the terms "component", "level", or "module" are used instead of dimension. 

Each dimension has a list of categories which comprise the fundamental building blocks of the dimension. In syntax, these include nouns, verbs, prepositions, etc. Each dimension also has the set of features which may apply to the categories. An example of a feature is a marker of gender or number. One could argue that the category is itself nothing but a bundle of features, an approach taken in many frameworks, but I prefer to distinguish the two. 

The grammatical rules that apply in the dimension are also specified. An example in the syntax of English is that a complete sentence must contain a verb. 

In ALGAE, my own implementation of Autolexical Grammar, there are five dimensions. 

The dimensions are: 

bullet Syntax 
bullet Logico-Semantics 
bullet Morphosyntax 
bullet Morphophonology 
bullet Discourse 

Syntax (Introduction)

Syntax is a level of representation which is concerned with the organization of units which combine to form sentences. These units are traditionally equated with words, but we will see that many affixes also play a direct role in syntax. One example from English is the possessive suffix 's. There are also words which are part of an utterance that do not have any function in the syntax. This group includes interjections such as uh, which is unrestricted syntactically and can appear between any two words. The general approach used in constructing our syntactic analysis is derived from X-bar theory (Jackendoff 1977) but in a much simpler form. 

Because many of the functions which are built into standard X-bar theory belong on other dimensions of autolexical theory, especially in morphosyntax, it is possible to greatly streamline the system. 

Logico-Semantics (Introduction)

This dimension represents the formal, combinatoric semantics and is similar to what some linguists call "logical form". But it is completely autonomous from the syntax, and should not be confused with the Government and Binding notion of "LF". Here units are categorized by logical type. The list of types includes formula, property, operator, quantifier and variable. The rules of the logico-semantics are context-free phrase structure rules without linear precedence. Hierarchies relevant to this dimension include case relations. 

It is assumed that the logico-semantic dimension is universal among human languages, though the framework in no way requires such universality. 

Morphosyntax (Introduction)

On this dimension we represent the internal structure of words. Units can be specified with regard to three types of information. The first type is the head feature, if any. In English, we have words which can be described as nominal, verbal, adjectival, adverbial etc. depending on the type of operations that can be performed on them. (I use the -al suffixed forms to avoid confusion with the syntactic concepts of noun, verb, adjective etc.) So, for example, a nominal stem can take the plural marker -s. The second piece of required information is the status of the item as a root, a stem, or an affix. The final entry concerns the subcategorization. An example is the suffix -ness, which attaches to adjectival stems and forms nominals. 
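The three types of morphosyntactic information (head feature, status, subcategorization) can be sketched as a small lookup. The entries below are invented for illustration and are not a fragment of any published morphosyntactic analysis:

```python
# Illustrative morphosyntactic entries: head feature, status, and
# subcategorization. -ness attaches to adjectival stems, forms nominals.
AFFIXES = {
    "-ness": {"status": "affix", "attaches_to": "adjectival",
              "forms": "nominal"},
    "-s":    {"status": "affix", "attaches_to": "nominal",
              "forms": "nominal"},   # plural marker on nominal stems
}
STEMS = {"happy": "adjectival", "dog": "nominal", "run": "verbal"}

def attach(stem, affix):
    """Return the head feature of stem+affix, or None if ill-formed."""
    entry = AFFIXES[affix]
    if STEMS.get(stem) != entry["attaches_to"]:
        return None
    return entry["forms"]
```

So happiness is a well-formed nominal, while *runness fails the subcategorization of -ness.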

The grammar of the morphosyntax is either finite-state or context-free, with the latter more generally accepted among autolexicalists. The crucial question is whether compounding and unlimited center-embedding should be handled directly by the morphosyntactic component. I have suggested that reduplication, for example, is realized at the morphophonological level rather than the morphosyntactic level, where it is merely an affix. 

Morphophonology (Introduction)

The morphosyntactic dimension does not represent those aspects of words which are closer to phonology, such as syllable structure. In recent work, the placement of an affix as a prefix, infix, or suffix has been considered a matter of morphophonology, and not morphosyntax. The division of morphology into two units is now generally accepted, though since most of the attention has been paid to morphosyntax the morphophonological level is less well developed. 

The morphophonology is templatic in nature. Here the location of an affix is specified as well as other word-internal phenomena. How much of pure phonology is incorporated into this dimension remains an open question. Under the most recent revisions to the framework, the problem of phonology may be reducible to the interaction of various hierarchies of phonological information. 

Discourse (Introduction)

There are a number of aspects of an utterance which are not encoded by the syntactic dimension, but which affect the ordering of elements. Among these, in some languages, are topic/comment structure, focus, and interrogativity. In other words, discourse deals with the communicative aspect of language, often on a fairly subtle level. The view of the discourse dimension presented here is innovative in that it includes two types of information which have previously been granted independent dimensions, known as illocution and surfotax. In addition, I have added a mechanism for dealing with anaphoric reference and another for intonation. 

The organization of the discourse dimension now has four sub-components, each of which is linked to another dimension via the interface. This is represented in the following graphic: 

Discussion of the subcomponents of the discourse dimension can be found in the more detailed discussion of this component. 

Jerry Sadock introduced a dimension of surfotax to account for linear precedence restrictions which cannot be attributed to either morphosyntax or syntax. The need for this level depends greatly on the internal details of those two levels. More than anything else, surfotax reflects the respect with which autolexicalists should treat the data. If the existing mechanisms of the framework cannot produce an adequate analysis in the existing dimensions, the data cannot merely be dismissed as aberrant. In all the cases I have seen, surfotax represents the actual spoken or written order of lexical items, and can be treated as the level of terminal nodes in the discourse dimension. 

The discourse dimension also reconciles pronominal and other anaphoric reference, making sure that gender and number are used appropriately. These and other details will be explored in a later section. 

Syntax (Detailed)

The syntactic dimension contains a representation of the constituent structure and linear order of an expression. The terminal nodes need not be full words, but can consist of any unit which bears a syntactic category label. The terminal syntactic node labels are drawn from the lexicon. The allowable structures of a language are determined by a set of rules, some of which may be members of a small set of universals. Many syntactic rules are language specific, and autolexicalists assume no fixed set of rules which are parametrically determined. See Constituent Structure and Categories for more details. 

Constituent Structure

X-bar theory was developed in the late 1970s, in large part due to the influence of Ray Jackendoff, whose 1977 book was the focus of a great deal of attention and debate in the ensuing decade. The "X" in X-bar stands for any syntactic head category, and the "bar" refers to levels of projections of that head. So if we consider a head noun, N, then there will be projections N-bar, N-doublebar, etc., sometimes notated with primes (N, N', N") or with actual bars resting on top of the N. We will use digits to indicate bar level in this document, i.e., N0, N1, N2. 

In theory, the application of X-bar theory should give rise to tight formal accounts of syntactic structure. Unfortunately, not all frameworks bother to give an explicit account of the rules and regulations of the X-bar theory they employ. In autolexical work, a tightly constrained X-bar theory is employed. 

Geoff Pullum has frequently criticized syntacticians for their failure to explicitly describe the details of the x-bar system they employ. All too often we find theoretical discussions involving phrases such as "assuming some theory of x-bar syntax". The X-bar characteristics of our syntactic framework are discussed in Schiller & Need (1992). Those uninterested in the gory details are encouraged to skip that section. 

X-Bar Syntax (Details)

Need & Schiller (1992) discusses the X-bar properties of the autolexical approach: 

"Pullum (1985) and Kornai & Pullum (1990) set forth explicit parameters for X-bar systems. All definitions presented below are taken verbatim from Kornai & Pullum (1990). 

It is not clear to us whether our system can be said to strictly adhere to Lexicality (a Lexicality-observing CFG observes Succession iff every rule rewriting some nonterminal Xn has a daughter labeled Xn-1), since there are non-terminal categories that are unspecified for bar level, e.g. M[N1>>N2] (= Determiner). But if we assume that descriptions of bar levels could include such formulae as [1>>2], then we might be said to be in compliance, and in any event our schema does not create any more violations than the slash categories of GPSG. 

Function categories such as [N1>>N1] clearly violate the strong version of Succession defined above but comply with Weak Succession, where the condition is "that the head of a phrase Xk is that daughter which (i) is a projection of X, (ii) has bar level j equal to or less than k, and (iii) has no other daughter that is a projection of X with fewer than j bars." (Kornai & Pullum 1990:29). Clearly any syntactic treatment of coordination is going to require the less stringent version of Succession. As noted above we do not consider prenominal adjectives to be headed by a major category A, and thus do not have a structure wherein an N-bar dominates a maximal projection of an adjective as well as another N-bar. Note that none of the proposed categories reduces the bar level, i.e. there is no category [N2>>N1]. 

As implemented, we do observe Uniformity (the maximum possible bar level is the same for every preterminal) on the syntactic dimension, where there are three bar levels for every head (0,1,2). We achieve this by treating the sentential unit as V2 (as in GPSG) and the traditional S-bar as P2 (following Emonds (1986)). It should be noted that some autolexicalists prefer to retain a COMP (C) head with a projection to C1 for subordinate clauses. I see nothing in principle to argue against this approach save that it makes the system less symmetric, but symmetry, while appealing, is not a desideratum of the framework. The question of A2 is more problematic, but we think that one can treat comparative as clauses and than clauses as instances of this category, though this is a matter we will not go into further here. Of course, as Kornai & Pullum note, a null projection of A1 could always be invoked should our analysis prove inadequate. 

Centrality (a Lexicality-observing CFG observes Centrality iff the start symbol is the maximal projection of a distinguished preterminal) demands null categories in order to account for attested language data in a wide variety of languages (Russian, Khmer, various Creoles and many others) where, if V2 is assumed as the start symbol, verbless sentences fail the test. Our abhorrence of empty categories on dimensions other than logico-semantics, where a reflexive operator seems to be required (Kathman 1991) force us to abandon Centrality. 

Optionality (non-heads are only optionally present) is strictly obeyed in Autolexical theory, because we hold that the cases cited by Pullum (1985) (e.g., John doesn't have) are all syntactically well-formed but fail to meet the conditions of the logico-semantics, in that have is a two-place relation and only one argument is accessible. Tense is not even a syntactic concept in our framework, being represented primarily on the logico-semantic dimension and, in languages with appropriate morphology, on the morphosyntactic dimension. Indeed, adherence to Optionality is one of the strong points of the Autolexical approach." 

Syntactic Categories

A syntactic category is a bundle of features. These features represent the syntactic type in terms of a head feature, a bar level, a subcategorization frame, and, in some languages, features which are used by the syntax to determine word order. Although no specific claims are made concerning the universal inventory of syntactic categories, we remain confident that the five categories listed below suffice for the description of all human languages, or at least those with which we are sufficiently acquainted. 

The categories are: Noun, Verb, Adposition, Adjective and Modifier 

Noun (syntactic category)

N0 is the category of a simple noun. It can be treated as the combination of two features (+N -V) as in other frameworks, though this is not necessary. 

Verb (syntactic category)

V0 is the category of a main verb. Not all of the items which are traditionally described as verbs fit into this category. We will see below that modals (e.g., should) belong to a different category. The category of verb can also be treated as the combination (-N +V).

Predicate Adjective (syntactic category)

A0 is the category of adjectives found in predicate positions. This should not be confused with the traditional definition of adjectives, which include words found in pre-nominal position. The word stupid in The policy is stupid is an adjective, but the same word in The economy was hampered by stupid policy is not. The reason for this will become clear in our discussion of X-bar theory below, but in a nutshell the point is that in the latter example the word stupid does not head a projection of an adjective, but rather simply combines with the noun policy to form a projection which is still headed by the noun. In many languages, the syntactic category of adjectives is not robust, and is virtually indistinguishable from the category of intransitive verbs. This category is also notated (+N +V). 

Adposition (syntactic category)

P0 is the category of adpositions, which includes both prepositions and postpositions. These combine with noun phrases to form prepositional (or postpositional) phrases. This category has sometimes been notated as (-N -V). 

Modifier (syntactic category)

M is a label applied to categories which do not directly instantiate a head category. This is used to describe many things, including adverbs and modals and the infinitival marker to. The easiest way to understand the conception behind this label is that each item which is so marked will combine with another item, and will inherit the head feature of that item. So if we combine an adverb (M) with an adjective (A), the category of the combined constituent will always be (A). Pre-nominal adjectives in English fall into this scheme, since combining such words as stupid with nouns like policy creates a nominal constituent which is headed by N. 

Universality of Syntactic Categories

Why should this set of categories be postulated as applying in all languages? Could there not be languages which use only a subset of these categories, or different categories entirely? How can we claim that what constitutes a noun in one language is also to be treated as a noun in another language? 

This is not an easy matter, but there are some observations which can assist us in determining the answers to those questions. First of all, there tends to be a strong correlation between syntactic category and semantic type (Croft 1991). There are many exceptions to this correlation, as we will see. Nouns and verbs tend to be robust, open classes which exist in all known languages. Adpositions and Adjectives are less universal, and there exist languages which show no evidence of these categories. As for the large number of adverbial categories, they are far from universal. 

On a more formal level, we can ask the following question when faced with a possible new major category to complement N, V, A, P: does the putative new category stand in addition to four other distinct and separate categories, or does it fit into some four-way distinction? 

Syntactic Features

The units of the syntax are items which are specified for syntactic category. Each category is a bundle of information which includes a head category, bar level, subcategorization frame and a feature list. Both head categories and bar levels can be seen as hierarchies in the revised model, though whether these small sets are ordered or not may not prove to be tremendously significant. 

Head categories were discussed in the previous section. 

Bar level is a member of the set {0,1,2} where 0 represents a lexical category, 2 is a maximal projection, and 1 is an intermediate projection. 

Subcategorization frames are rules involving pairs of categories of the form [A >> B] where A is the category with which the lexical item is combined to yield B. For example, we can say that a determiner (the) combines with an N-bar (big linebacker) to form a noun phrase (N2) by applying the rule encoded in the subcategorization frame of all determiners: [N1>>N2]. The redirection symbol >> is used to mark the combinatoric property of the subcategorization frame. We will return to these frames in more detail in our discussion of lexical entries below. 
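The treatment of categories as bundles of information (head, bar level, subcategorization frame) and the combinatoric use of frames such as [N1>>N2] can be sketched in code. The representation below is a minimal illustration only; the names `Category` and `combine` are hypothetical, not part of the framework.

```python
# A minimal sketch of syntactic categories as feature bundles.
# Names (Category, combine) are illustrative, not from the framework itself.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Category:
    head: str                  # head feature: N, V, A, P, or M
    bar: Optional[int] = None  # bar level 0, 1, or 2
    subcat: Optional[Tuple[str, int, int]] = None  # frame [X-in >> X-out]

def combine(functor: Category, argument: Category) -> Category:
    """Apply a subcategorization frame [X-n >> X-m]: the functor combines
    with an argument of category X at bar level n to yield X at level m."""
    head, bar_in, bar_out = functor.subcat
    if argument.head != head or argument.bar != bar_in:
        raise ValueError("argument does not match subcategorization frame")
    return Category(head=head, bar=bar_out)

# A determiner ('the') carries the frame [N1 >> N2]:
the = Category(head="M", subcat=("N", 1, 2))
big_linebacker = Category(head="N", bar=1)   # an N-bar
np = combine(the, big_linebacker)
print(np)  # Category(head='N', bar=2, subcat=None) — a full noun phrase
```

Note that the result inherits the head named in the frame, not the head of the functor, which is also how an M category comes to inherit the head feature of the item it combines with.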

Syntactic features vary greatly among languages. Specifications for tense, aspect, person, number and gender are found in many languages, but only a small subset of these languages actually mark these features in the syntax. As these are essentially semantic notions, they can be instantiated in a variety of ways, including morphological marking, syntactic units, or not at all. Consider tense, defined for the moment as a deictic marker of relative ordering in time. English marks this morphologically for the past tense and present tense. There are no rules of the syntax which refer to the choice between the two, but there are rules which are sensitive to whether or not a constituent is tensed, in that there are rules which apply only to finite clauses, and rules which apply only to non-finite clauses. The relationship between finiteness and tense is only indirect, in English, because we can have non-finite constituents in the past tense, e.g. I want to have finished this chapter by nine o'clock.

In English, agreement is generally a matter of matching the semantic facts with the morphological markers, but there are certainly cases where syntax plays a role, and these will be examined later on. In any case, the revised model suggests that these features may be part of hierarchies which interact with various dimensions. 

Syntactic Rules

All rules in autolexical work are context-free phrase structure rules. We adopt the ID/LP formulation used in GPSG (Gazdar et al. 1985), where rules for immediate dominance and linear precedence can be independently formulated. 

X > Y , Z: X immediately dominates Y and Z, with no claim that Y precedes Z. 

X > Y Z: X immediately dominates Y and Z, and Y precedes Z. 

X < Y: X precedes Y (a linear precedence statement). 

The general X-bar schema provides a number of such rules automatically. 
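The separation of dominance from precedence can be sketched as a licensing check: an ID rule supplies an unordered set of daughters, and LP statements independently constrain their order. The rule tables and the function name `licensed` below are illustrative assumptions, not a published rule inventory.

```python
# Sketch of an ID/LP check in the GPSG style: immediate-dominance rules list
# unordered daughters; linear-precedence statements constrain order separately.
# Names (ID_RULES, LP, licensed) are illustrative.

ID_RULES = {
    ("N", 2): {("M", None), ("N", 1)},  # N2 dominates a determiner (M) and an N1
}
LP = {(("M", None), ("N", 1))}           # M precedes N1

def licensed(mother, daughters):
    """A local tree is licensed iff some ID rule supplies exactly these
    daughters (order ignored) and every LP statement is respected."""
    if ID_RULES.get(mother) != set(daughters):
        return False
    for i, a in enumerate(daughters):
        for b in daughters[i + 1:]:
            if (b, a) in LP:  # b should precede a, but does not
                return False
    return True

print(licensed(("N", 2), [("M", None), ("N", 1)]))  # True
print(licensed(("N", 2), [("N", 1), ("M", None)]))  # False: violates M < N1
```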

Logico-Semantics (Detailed)

This dimension is concerned with the well-formedness of logical expressions. Until recently, this was a mixture of information about argument structure and quantification. We have now (finally) separated these into two distinct hierarchies, but both may be treated within the same dimension. 

Categories

Operator

An operator combines with a property or formula to create another category of the same type or different type. 

An example of a simple operator [o] is not. 

Property

A property is an element that combines with a bound variable to create a formula (e.g., red). Properties are notated as [f]. 

Entity

An entity is the semantic equivalent of a common noun, e.g., dog, which combines with a quantifier (binder expression) and a variable to create a bound variable. Entities are notated as [k].

Formula

A formula contains a simple predicate or a transitive predicate with one or more arguments. 

An example of a simple predicate [F-1] is going to the store.

An example of a transitive predicate [F-2] is broke. 

An example of a ditransitive predicate [F-3] is put. 

An example of a full formula [F0] is John is going to the store.

Variable

A variable [x] is a reference to a real-world object. 

Quantifier

A quantifier [q] combines with a property to form a binder expression [Q] which can be used to fill the argument structure of formulas. 

Example: three

Rules

The grammar of the logico-semantic dimension is a context-free grammar with dominance but not precedence rules. Among the hierarchies which reside here are quantification, tense, aspect, number and case.
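The notation above, in which F-2 is a two-place predicate, F-1 a one-place predicate, and F0 a full formula, can be read as argument saturation: each argument supplied reduces the open places by one. The function below is an illustrative sketch of that bookkeeping only; the name `apply_predicate` and the string encoding are assumptions, not part of the framework.

```python
# Illustrative sketch of predicate saturation on the logico-semantic dimension.
# An n-place predicate [F-n] consumes arguments one at a time; a fully
# saturated predicate is a formula [F0].

def apply_predicate(pred_arity: int, args: int) -> str:
    """Saturate an n-place predicate: each argument reduces the arity by one."""
    remaining = pred_arity - args
    if remaining < 0:
        raise ValueError("too many arguments")
    return f"F{-remaining}" if remaining else "F0"

# 'broke' is a two-place predicate [F-2]:
print(apply_predicate(2, 1))  # "F-1": one argument still open
print(apply_predicate(2, 2))  # "F0": a full formula, e.g. 'John broke it'
```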

Morphosyntax (Detailed)

The morphosyntactic dimension varies considerably among languages, not only with regard to combinatoric rules, as is also the case in syntax, but also in terms of the categories involved. Some languages have rich systems of affixation, such as Georgian and West Greenlandic, while others show almost no morphosyntactic activity at all, e.g., many Tai languages. 

Categories

Because the morphosyntax is so language-specific, the categories vary widely. It seems that categories consist of two attributes: combinatoric and head. 

Combinatoric features specify which other morphosyntactic categories the given category can combine with. A nominal suffix, for example, will combine with a noun-stem or item headed by a noun-stem.

The head feature determines the paradigmatic class of lexical item, for example verbal, nominal, adjectival, etc. 

Rules

Morphosyntax is concatenative. The only rule is that items combine to form larger units.
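Under the two-attribute view above, concatenation can be sketched as a check that the affix's combinatoric attribute matches the stem's head, with the result keeping the stem's head. The dictionary representation and the name `attach` are assumptions for illustration.

```python
# A sketch of concatenative morphosyntax: each item has a head (its
# paradigmatic class) and a combinatoric attribute naming what it attaches to.
# Representation and names are hypothetical.

def attach(affix, stem):
    """An affix combines with a stem (or stem-headed item) of the class its
    combinatoric attribute names; the result keeps the stem's head."""
    if affix["combines_with"] != stem["head"]:
        raise ValueError("affix cannot attach to this stem")
    return {"head": stem["head"], "form": stem["form"] + affix["form"]}

plural = {"form": "-s", "combines_with": "N", "head": "N"}  # a nominal suffix
dog = {"form": "dog", "head": "N"}                          # a noun stem
print(attach(plural, dog))  # {'head': 'N', 'form': 'dog-s'}
```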

Morphophonology (Detailed)

Categories

The categories of morphophonology have yet to be worked out in detail. Many attributes are involved. One important element is the positional requirement of a lexeme, for example as a prefix, infix or suffix. The number of syllables sometimes determines eligibility for affixation, and this should be specified here. Possibly this should include counts of nuclei and other sub-syllabic entities. 

Attributes

The number of syllables [SL] 

The positions of affixation [AF] 

Rules

Concatenation is the primary operation on this dimension. Reduplication is also realized here. 

Discourse (Detailed)

The discourse dimension is a bit more complex than the other dimensions, but still has a basic X-bar representation. 

The discourse dimension is linked to several external objects: the lexicon, the context register and the lexical memory register. 

Categories

The discourse categories include attributes which apply to lexemes, phrases, and utterances. It is reasonable to adopt our usual three-level X-bar system with lexemes as terminal nodes [U0], phrases as intermediate projections [U1] and utterances as root nodes [U2]. Attribute percolation applies, but the attribute inventory does not include head, so all attributes percolate. Some attributes have no effect at the root node level, but can play an important role at the phrasal level. 
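Since the discourse attribute inventory has no head, percolation can be sketched as a simple union of the daughters' attributes onto the mother node. The function name `percolate` and the dictionary encoding are illustrative assumptions.

```python
# Sketch of headless attribute percolation on the discourse dimension:
# with no head attribute, every daughter's attributes percolate up.

def percolate(daughters):
    """Union the attribute sets of all daughters onto the mother node;
    on a conflict, the first daughter's value is kept (an assumption)."""
    mother = {}
    for d in daughters:
        for attr, val in d.items():
            mother.setdefault(attr, val)
    return mother

lexeme1 = {"FO": "+", "NM": "sg"}     # a focused singular lexeme [U0]
lexeme2 = {"TP": "+"}                 # a topic-marked lexeme [U0]
print(percolate([lexeme1, lexeme2]))  # the phrasal node [U1] carries all three
```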

Attributes

Speaker Responsibility [SR]

The handling of speech act (illocutionary act) information was originally proposed by William H. Eilfort in 1989. Eilfort proposed two binary features to encode speech acts. Although these are traditionally considered matters of pragmatics (except in Generative Semantics, where they are treated as syntax), there is clear evidence that this information is grammaticized in many languages, and therefore there must be some level at which an autolexical analysis takes place. In the present model, these hierarchies are present in the discourse dimension. 

Speaker responsibility indicates whether the speaker takes responsibility for the information in the utterance: broadly speaking, it is positive in declarative statements and negative in interrogative statements. Speaker Responsibility is a hierarchy, though most published work to date treats it as a binary feature. 

Information Passing [IP]

Information passing is concerned with whether or not information is being transmitted to the addressee. In declarative statements and imperative statements it is, in most questions it is not. 

Focus [FO]

Focus is a feature which is often mapped to the intonation contour, but can also be supplied lexically (e.g., in-fucking-credible, where the infix does not have to bear intonational or phonological stress). In transformational frameworks, focus is sometimes used to account for word order phenomena and can also be used to trigger operations such as passive. In the present framework, focus plays a significant role at the surfotax level of the discourse dimension. 

Topic [TP]

Topic/comment structure plays a role in most languages, and has an exceptionally powerful influence on surfotax (word order) in many. 

The topic feature is applied to the phrase which the speaker wishes to make the topic of conversation. 

The opposite end of the topic scale is the comment, the phrases which are specified as the comment to a particular topic. A comment feature cannot be instantiated unless there is also a topic. 

Number [NM]

The number hierarchy handles duality and plurality, and is used at the lexical and phrasal levels. Coordination overrides the singularity settings of the concatenated items; plurality is imposed by the coordinator itself as a lexical feature. 

Category [CA]

Category is a feature which maps real-world entities onto language-specific paradigmatic categories such as gender. Mismatches between real-world properties and lexicalized gender are resolved by the lexicon, where the lexicalized version is stored. 

Intonation [IN]

Intonation contours have not received much attention yet, but this is where they will be handled. 

Stress [ST]

Stress is a feature that can arise from lexical specification or via templatic assignation. This area also awaits serious investigation. 

Rules

bullet An utterance may contain a topic, and if there is a topic, there must be a comment. 
bullet An utterance consists of one or more phrases. 
bullet A phrase consists of one or more lexemes. 
bullet Every phrase has an intonation contour. 
bullet Every phrase is entered in the propositional register. 
bullet Every referenced entity is entered in the entity register. 
bullet Intonation contours must comply with the available contours for the language. 
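The rules above can be sketched as a checker over an utterance represented as a list of phrase attribute dictionaries. The representation, the attribute keys, and the function name `well_formed` are assumptions for illustration; the registers and the full contour inventory are omitted.

```python
# A sketch of discourse well-formedness: an utterance has one or more phrases,
# a topic demands a comment, and every phrase needs a licit intonation contour.

def well_formed(phrases, contours_of_language):
    if not phrases:                       # an utterance has one or more phrases
        return False
    has_topic = any(p.get("TP") for p in phrases)
    has_comment = any(p.get("comment") for p in phrases)
    if has_topic and not has_comment:     # a topic requires a comment
        return False
    if has_comment and not has_topic:     # no comment without a topic
        return False
    for p in phrases:
        if p.get("IN") not in contours_of_language:  # contour must be available
            return False
    return True

contours = {"fall", "rise"}               # hypothetical contour inventory
print(well_formed([{"TP": True, "IN": "rise"},
                   {"comment": True, "IN": "fall"}], contours))  # True
print(well_formed([{"TP": True, "IN": "fall"}], contours))       # False
```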

Objects and Methods for Autolexical Grammar

30-June-97 This section has been replaced by a standalone document. Click here to see it.

Lexicon

09-June-97 This section has been replaced by a standalone document. Click here to see it.

Context Register

The Context Register lies outside grammar per se, but is used by the grammar. It is reasonable to assume that this, and lexical memory, are handled by normal memory and are not part of any language-specific area of the brain. 

The Context Register stores information about entities and propositions, including references to real-world objects. Any item in the register is available for reuse in the discourse, via citation or paraphrase, using anaphoric means. 

The form of the Context Register is a stack (last in, first out). 

For example, the anaphoric device do so makes reference to a predicate phrase which is also a VP in the syntax. That predicate phrase must be in the contextual register. 
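The last-in, first-out behavior described above can be sketched as a stack whose most recent matching entry is what an anaphoric device such as do so retrieves. The class and method names below are illustrative, not part of the framework.

```python
# A sketch of the Context Register as a last-in, first-out stack.

class ContextRegister:
    def __init__(self):
        self._stack = []

    def enter(self, item):
        """Entities and propositions are pushed as the discourse proceeds."""
        self._stack.append(item)

    def most_recent(self, predicate=lambda item: True):
        """Anaphoric devices such as 'do so' retrieve the most recently
        entered matching item (last in, first out)."""
        for item in reversed(self._stack):
            if predicate(item):
                return item
        return None

reg = ContextRegister()
reg.enter(("proposition", "John washed the car"))
reg.enter(("predicate", "wash the car"))
print(reg.most_recent(lambda i: i[0] == "predicate"))
# ('predicate', 'wash the car')
```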

Lexical Memory

Lexical memory is the actual real-world storage of the lexical items as reflected in the surfotax. These items are available for re-use in the discourse, and allow for reference to them. For example: You said "Newt is a moron" not "Newt is a fool".

The formal representation of each lexeme is its citation form in the lexicon. 

Methods

The rules for each dimension are private methods within the dimension object. Public functions provide exposure of grammatical objects to the interface for those objects which need to be exposed. 

Interface

The interface is invoked either during parsing or generation of utterances. It has a set of methods which compare the structural representations exposed by individual dimensions and check for compatibility. If discrepancies are found, then the interface determines whether the mismatches are allowed under the Generalized Interface Principle, which states that representations among dimensions must match to the maximum extent possible given the requirements of the lexical items involved. 
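The interface's compatibility check can be sketched as a comparison of two dimensions' views of the same units, with any disagreement licensed only by the lexical items involved. Everything here, including the function name `gip_compatible` and the example mismatch, is an illustrative assumption rather than a published formalization.

```python
# A sketch of the interface check under the Generalized Interface Principle:
# representations on different dimensions should match maximally, and any
# mismatch must be licensed by the lexical item involved.

def gip_compatible(dim_a, dim_b, licensed_mismatches):
    """Compare two dimensions' placements of the same units; an unlicensed
    disagreement rejects the pairing."""
    for unit in dim_a:
        if unit in dim_b and dim_a[unit] != dim_b[unit]:
            if unit not in licensed_mismatches:
                return False
    return True

syntax = {"-s": "phrase-final"}    # one dimension's view of a hypothetical clitic
morphology = {"-s": "word-level"}  # another dimension's view of the same unit
print(gip_compatible(syntax, morphology, licensed_mismatches={"-s"}))  # True
print(gip_compatible(syntax, morphology, licensed_mismatches=set()))   # False
```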

Autolexical Markup Language (ALEXML)

In anticipation of next-generation browsers, an XML implementation of ALGAE is being prepared. This will allow a great deal of flexibility in presentation of autolexical information. Preliminary information on this project is available in a separate document: Autolexical Markup Language.

Closing remarks

This document is a superficial overview of autolexical theory as implemented at Linguistics Unlimited in 1997. I hope to be able to expand on it in the future, but for now, it is as it is. I decided to circulate it in its current form to a select set of individuals who have sufficient familiarity with the subject matter to be able to understand it. 

By placing it on my web site, I have also made it available to many people for whom the subject matter is not as familiar. Since my goal is to refine this document so that it can be understood by the computer science community and the general public, I welcome input from anyone with suggestions on how to improve it, but do wish to point out that at present it is a very terse document with few examples and explanations. 

Comments and suggestions are most welcome via email at: linguist@chessworks.com 

References

In general, the best way to find important autolexical papers is to examine the volumes of the Chicago Linguistic Society (CLS).

Related to Autolexical Theory

Chelliah, Shobhana. 1989. An Autolexical account of voicing assimilation in Manipuri. In Schiller, Steinberg & Need. 

Eilfort, William and Eric Schiller. 1990. "Pragmatics and Grammar: Cross-Modular Relations in Autolexical Theory." CLS 26.1. 125-136. 

Faarlund, Jan Terje. 1989. "Autostructural Analysis." In Schiller, Steinberg & Need. 

Graczyk, Randolph. 1991. "Incorporation and Cliticization in Crow Morphosyntax." Ph.D., University of Chicago, 1991. 

Kathman, David. 1992. "Control in Autolexical Syntax." In Schiller, Steinberg & Need. 

Lapointe, Steven G. 1987. "Some Extensions of the Autolexical Approach to Structural Mismatches." In Syntax and Semantics, Volume 20: Discontinuous Constituency, ed. Geoffrey J. Huck and Almerindo E. Ojeda. 152-184. 20. Orlando: Academic Press, 1987. 

Sadock, Jerrold M. 1983. "The necessary overlapping of grammatical components." CLS 19.2 (1983): 198-221. 

Sadock, Jerrold M. 1986. "An autolexical view of pronouns, anaphora, and agreement." UCWPIL 2 (1986): 143-164. 

Sadock, Jerrold M. 1985. "Autolexical Syntax: A Proposal for the Treatment of Noun Incorporation and Similar Phenomena." NLLT 3 (1985): 379-439. 

Sadock, Jerrold M. 1989. Some pleasures and pitfalls of Autolexical Syntax. In Schiller, Steinberg & Need. 

Sadock, Jerrold M. 1991. Autolexical Syntax: A Theory of Parallel Grammatical Representations. Studies in Contemporary Linguistics, Chicago: University of Chicago Press, 1991. 

Sadock, Jerrold M. 1992 "A Paper on Yiddish for James D. McCawley." In The Joy of Grammar, ed. Diane Brentari, Gary N. Larson, and Lynn A. MacLeod. 323-328. Amsterdam: John Benjamins, 1992. 

Sadock, Jerrold M. and Eric Schiller. 1993. "The Generalized Interface Principle" CLS 29. 

Schiller, Eric. 1989a. "Syntactic Polysemy and Underspecification in the Lexicon." BLS 15 (1989): 278-290. 

Schiller, Eric. 1990. "Focus and the Discourse Dimension in Autolexical Theory." ESCOL 7 (1990). 

Schiller, Eric. 1991. "An Autolexical Account of Subordinating Serial Verb Constructions." Ph.D., University of Chicago, 1991. 

Schiller, Eric. 1992. "Infixes: Clitics at the Morphophonological Level." CLS 28. 

Schiller, Eric & Barbara Need. 1992. "The Liberation of Minor Categories: Such a Nice Idea!" CLS 28. 

Schiller, Eric. 1995. "Not yes, not no: The Zen of Khmer Discourse Particles." BLS 21: 107-113. 

Schiller, Eric. 1995. "Expressives: Inside or outside of Grammar." CLS 31.

Schiller, Eric. 1996. Performatives in Autolexical Grammar. CLS 32. 

Schiller, Eric, Elisa Steinberg & Barbara Need. 1995. Autolexical Syntax: Ideas and Methods. Berlin: Mouton. 

Schneider, Robinson. 1989. "Toward a tri-modular analysis of -ly adverbs." In Schiller, Steinberg & Need. 

Smessaert, Hans. 1988. "An Autolexical Syntax Approach to Pronominal Cliticization in West Flemish." M.A., University of Chicago, 1988. 

Smessaert, Hans. 1991. "Pronominal Cliticization in West Flemish." In Schiller, Steinberg & Need. 

Other useful references

Croft, William. 1991. Syntactic Categories and Grammatical Relations: The Cognitive Organization of Information. Chicago: University of Chicago Press, 1991. 

Emonds, J. 1986. A Unified Theory of Syntactic Categories. Dordrecht: Foris.

Gupta, Anil. 1980. The Logic of Common Nouns: An investigation in quantified modal logic. New Haven: Yale University Press. 

Jackendoff, Ray. 1977. X-Bar Syntax: A Study of Phrase Structure. Cambridge, MA: MIT Press, 1977. 

Kornai, Andras & Geoffrey K. Pullum. 1990. "The X-bar theory of phrase structure." Language 66.1: 24-50. 

Leer, Jeff. 1991. "Schetic Categories of the Tlingit Verb." Ph.D., University of Chicago, 1991. 

McCawley, James D. 1988. The Syntactic Phenomena of English. Chicago: University of Chicago Press, 1988. 

Pollard, Carl and Ivan Sag. 1988. "An Information-Based Theory of Agreement." CLS 24.2 (1988): 236-257. 

Pullum, Geoffrey K. 1985. "Assuming Some Version of X-Bar Theory." CLS 21: 323-353. 

BLS = Berkeley Linguistic Society

CLS = Chicago Linguistic Society

ESCOL = Eastern States Conference on Linguistics

NLLT = Natural Language and Linguistic Theory

UCWPIL = University of Chicago Working Papers in Linguistics

Credits

The ideas contained in this document were not born in a vacuum. Tremendous credit must be given to Jerry Sadock, the founder of Autolexical theory, my frequent co-author Barbara Need, David Kathman and most of the faculty and graduate students at the University of Chicago from 1985 to 1992. Still, this particular implementation cannot be blamed on them. All errors in form, content, analysis or judgment are mine alone. 

This document was created by Eric Schiller using: 

Microsoft Word for Windows (Microsoft Corporation) 

Microsoft Access for Windows (Microsoft Corporation) 

Front Page 98 (Microsoft Corporation)

Linguistics Template for Microsoft Word (Linguistics Unlimited)