Events

Distinguished phonologist Paul Smolensky, Krieger-Eisenhower Professor of Cognitive Science at Johns Hopkins University, will give a series of three lectures, generously supported by Dave Baggett.

Overview of the lectures

A fundamental task of cognitive science is reconciling (i) the discrete, categorical character of mental states and knowledge — e.g., symbolic expressions governed by symbolic rules of grammar or logic — with (ii) the continuous, gradient character of neural states and processes. This year’s Baggett Lectures will present an approach to unifying discrete symbolic and continuous neural computation: Gradient Symbolic Computation (GSC). This unification leads to new grammatical theories as well as novel neural network architectures that realize these theories. The importance of reconciling symbolic and neural network computation now extends beyond basic science into applied Natural Language Processing, where the best-performing systems utilize neural networks, but it is not currently known how to construct networks that enable rapid instruction, human understanding of internal knowledge, and competence in a diversity of tasks — all properties that are characteristic of symbolic systems.

Lecture 1, Unifying discrete linguistic computation with continuous neural computation (Nov. 16, 3:30, Maryland Room)

GSC’s novel neural architecture — capable of encoding and processing symbol structures — will be presented, and the new grammatical theories that emerge from this architecture will be described and illustrated: theories in which grammars are evaluators of well-formedness, and grammatical structures are those that are maximally well-formed or optimal. Gradient Symbol Structures will be defined: these are structures (such as phonological strings or syntactic trees) in which each single location hosts a blend of symbols, each present (or active) to a continuously variable degree.
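
As a rough, purely illustrative sketch of the definition above (and not GSC's actual neural encoding, which is far richer), a gradient symbol structure can be pictured as a sequence of positions, each hosting a blend of symbols with continuously variable activations; the symbols and numbers below are invented for illustration.

```python
# Toy illustration only: a Gradient Symbol Structure pictured as a string in
# which each position hosts a blend of symbols, each present (active) to a
# continuously variable degree. Symbols and activation values are invented.

from typing import Dict, List

Blend = Dict[str, float]        # symbol -> degree of presence (activation)
GradientString = List[Blend]    # one blend of symbols per string position

# A gradient phonological string: the final position hosts a blend in which
# /d/ is present to degree 0.6 and /t/ to degree 0.4.
example: GradientString = [
    {"b": 1.0},
    {"a": 1.0},
    {"d": 0.6, "t": 0.4},
]

def discretize(structure: GradientString) -> str:
    """Read off an ordinary discrete string by taking each position's most
    active symbol -- one simple way a gradient structure can be related to
    a fully categorical one."""
    return "".join(max(pos, key=pos.get) for pos in structure)

print(discretize(example))  # -> "bad"
```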

Lecture 2, Gradient symbols in grammatical competence (Nov. 17, 3:30, Maryland Room)

Use of gradient symbol structures in theories of grammatical competence will be illustrated by partially-present constituents in base positions of syntactic wh-movement, partially-present [voice] features in final consonants in certain final-devoicing languages, and, most extensively, partially-present consonants in underlying forms of French words participating in liaison — consonants which disappear in contexts where fully-present consonants remain. The liaison case illustrates how gradient versions of multiple distinct structures posited by competing theories can be blended to form an account that covers a range of data that no single structure can explain.
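
To make the blending idea concrete, here is a deliberately simplified, hypothetical sketch (not the Harmonic Grammar analysis presented in the lecture): two competing underlying forms for a liaison word, one ending in a consonant and one not, are blended into a single form whose final consonant is only partially present, and a toy realization rule lets that partially present consonant surface only in a supporting pre-vocalic context, while a fully present consonant surfaces everywhere. The word, weights, and threshold values are invented for illustration.

```python
# Hypothetical toy sketch of blending two competing underlying forms into a
# single gradient form (weights and realization rule invented; not the
# analysis presented in the lectures).

from typing import Dict

Blend = Dict[str, float]   # final-consonant symbol -> activation

# Two classical analyses of a liaison word such as French "petit":
# (a) consonant-final underlying form, (b) vowel-final underlying form.
consonant_final: Blend = {"t": 1.0}
vowel_final: Blend = {"∅": 1.0}   # "∅" marks absence of a final consonant

def blend(a: Blend, b: Blend, w: float) -> Blend:
    """Weighted blend of two structures: each symbol's activation is the
    weighted average of its activations in the two inputs."""
    symbols = set(a) | set(b)
    return {s: w * a.get(s, 0.0) + (1 - w) * b.get(s, 0.0) for s in symbols}

# A gradient underlying form: the final /t/ is present only to degree 0.6.
gradient_form = blend(consonant_final, vowel_final, 0.6)

def surfaces(form: Blend, prevocalic: bool) -> bool:
    """Toy realization rule: a partially present consonant needs the extra
    support of a following vowel; a fully present one surfaces regardless."""
    threshold = 0.5 if prevocalic else 0.9
    return form.get("t", 0.0) >= threshold

print(surfaces(gradient_form, prevocalic=True))     # True  (liaison context)
print(surfaces(gradient_form, prevocalic=False))    # False (no liaison)
print(surfaces(consonant_final, prevocalic=False))  # True  (fully present consonant)
```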

Lecture 3, Gradient symbols and graded universals in grammatical processing and learning (Nov. 18, 10:00, Maryland Room)

Gradient Symbolic Computation process models of incremental (word-by-word) syntactic parsing will be discussed, as well as process models of graded probabilistic biases in language learning and the potential role of such biases in explaining statistical typological universals.