A Computational Algebraic Approach to English Grammar

Whereas type-logical grammars treat syntactic derivations as logical proofs, usually represented by two-dimensional diagrams, I here wish to defend the view that people process linguistic information by one-dimensional calculations and will explore an algebraic approach based on the notion of a "pregroup," a partially ordered monoid in which each element has both a left and a right "adjoint." As a first approximation, say to English, one assigns to each word one or more "syntactic types," elements of the free pregroup generated by a partially ordered set of "basic types," in the expectation that the grammaticality of a string of words can be checked by a calculation on the corresponding types. This theoretical framework provides a simple foundation for a kind of feature checking that may be of general interest. According to G. A. Miller, there is a limit to the temporary storage capacity of our short-term memory, which cannot hold more than seven (plus or minus two) "chunks" of information at any time. I explore here the possibility of identifying these chunks with "simple types," which are obtained from basic types by forming iterated adjoints. In a more speculative vein, I attempt to find out how so-called constraints on transformations can be framed in the present algebraic context.
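To make the type-checking idea concrete, the following is a minimal sketch (in Python, not part of the paper itself) of contraction-based reduction in a free pregroup over a trivially ordered set of basic types. A simple type is represented as a pair (basic type, adjoint iteration z), and the only rewriting step used is the contraction a^(z) a^(z+1) -> 1. The tiny lexicon, in which a subject pronoun gets type π, a transitive verb π^r s o^l, and an object pronoun o, follows the general pattern of pregroup type assignments but is an illustrative assumption, as is the target sentence type s. The search over non-crossing pairings relies on the fact, due to Lambek, that contractions alone suffice when checking that a string of types reduces to a simple type.

```python
from functools import lru_cache

# A simple type is a pair (basic, z): the z-fold adjoint of a basic type.
# Convention: z = -1 is the left adjoint a^l, z = +1 the right adjoint a^r.
# The basic-type names below (pi, s, o) are illustrative, not a full inventory.

def contracts(x, y):
    """x followed by y contracts to the unit: a^(z) a^(z+1) -> 1
    (a trivial partial order on basic types is assumed here)."""
    return x[0] == y[0] and y[1] == x[1] + 1

def reduces_to_unit(types):
    """True if the string of simple types contracts away to the empty string."""
    n = len(types)

    @lru_cache(maxsize=None)
    def empty(i, j):                     # considers the substring types[i:j]
        if i == j:
            return True
        # Pair types[i] with some types[k]; the enclosed segment and the
        # remainder must each contract away (a non-crossing matching).
        return any(contracts(types[i], types[k])
                   and empty(i + 1, k) and empty(k + 1, j)
                   for k in range(i + 1, j))

    return empty(0, n)

def grammatical(types, target=("s", 0)):
    """True if the whole string contracts to the single sentence type."""
    return any(types[p] == target
               and reduces_to_unit(tuple(types[:p]))
               and reduces_to_unit(tuple(types[p + 1:]))
               for p in range(len(types)))

# "He sees her":  he -> pi,  sees -> pi^r s o^l,  her -> o
sentence = [("pi", 0),
            ("pi", 1), ("s", 0), ("o", -1),
            ("o", 0)]
print(grammatical(sentence))                                   # True
print(grammatical([("o", 0), ("pi", 1), ("s", 0), ("o", -1)])) # False: object
                                                               # pronoun as subject,
                                                               # missing object
```

The dynamic programming here simply enumerates the possible non-crossing matchings of contracted pairs; it is meant only to show that grammaticality checking reduces to a one-dimensional calculation on strings of simple types, as the abstract claims.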