
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated such community in North America.


Being pragmatic about syntactic bootstrapping

Syntactic and pragmatic cues to the meanings of modal and attitude verbs.

Linguistics

Author/Lead: Valentine Hacquard

Words have meanings vastly underdetermined by the contexts in which they occur. Their acquisition therefore presents formidable problems of induction. Lila Gleitman and colleagues have advocated for one part of a solution: indirect evidence for a word’s meaning may come from its syntactic distribution, via SYNTACTIC BOOTSTRAPPING. But while formal theories argue for principled links between meaning and syntax, actual syntactic evidence about meaning is noisy and highly abstract. This paper examines the role that syntactic bootstrapping can play in learning modal and attitude verb meanings, for which the physical context is especially uninformative. I argue that abstract syntactic classifications are useful to the child, but that something further is both necessary and available. I examine how pragmatic and syntactic cues can combine in mutually constraining ways to help learners infer attitude meanings, but need to be supplemented by semantic information from the lexical context in the case of modals.


Individuals versus ensembles and "each" versus "every": Linguistic framing affects performance in a change detection task

More evidence that "every" but not "each" evokes ensemble representations.

Linguistics

Contributor(s): Jeffrey Lidz, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda


Though each and every are both distributive universal quantifiers, a common theme in linguistic and psycholinguistic investigations into them has been that each is somehow more individualistic than every. We offer a novel explanation for this generalization: each has a first-order meaning which serves as an internalized instruction to cognition to build a thought that calls for representing the (restricted) domain as a series of individuals; by contrast, every has a second-order meaning which serves as an instruction to build a thought that calls for grouping the domain. In support of this view, we show that these distinct meanings invite the use of distinct verification strategies, using a novel paradigm. In two experiments, participants who had been asked to verify sentences like each/every circle is green were subsequently given a change detection task. Those who evaluated each-sentences were better able to detect the change, suggesting they encoded the individual circles' colors to a greater degree. Taken together with past work demonstrating that participants recall group properties after evaluating sentences with every better than after evaluating sentences with each, these results support the hypothesis that each and every call for treating the individuals that constitute their domain differently: as independent individuals (each) or as members of an ensemble collection (every). We situate our findings within a conception of linguistic meanings as instructions for thought building, on which the format of the resulting thought has consequences for how meanings interface with non-linguistic cognition.
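The first-order/second-order contrast can be schematized roughly as follows, using the experimental sentence each/every circle is green. The formulas are an illustrative sketch in standard notation, not the authors' own formalism.

```latex
% Sketch of the contrast described above (not the authors' own formalism).
% First-order "each": quantify directly over the individuals in the domain.
\[
\forall x\,[\mathrm{Circle}(x) \rightarrow \mathrm{Green}(x)]
\]
% Second-order "every": quantify via the domain taken as a group (a set X).
\[
\exists X\,[\,\forall x\,(X(x) \leftrightarrow \mathrm{Circle}(x)) \wedge \forall x\,(X(x) \rightarrow \mathrm{Green}(x))\,]
\]
```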


Psycholinguistic evidence for restricted quantification

Determiners express restricted quantifiers and not relations between sets.

Linguistics, Philosophy

Contributor(s): Jeffrey Lidz, Alexander Williams, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (JHU)

Publisher: Springer

Quantificational determiners are often said to be devices for expressing relations. For example, the meaning of every is standardly described as the inclusion relation, with a sentence like every frog is green meaning roughly that the green things include the frogs. Here, we consider an older, non-relational alternative: determiners are tools for creating restricted quantifiers. On this view, determiners specify how many elements of a restricted domain (e.g., the frogs) satisfy a given condition (e.g., being green). One important difference concerns how the determiner treats its two grammatical arguments. On the relational view, the arguments are on a logical par as independent terms that specify the two relata. But on the restricted view, the arguments play distinct logical roles: specifying the limited domain versus supplying an additional condition on domain entities. We present psycholinguistic evidence suggesting that the restricted view better describes what speakers know when they know the meaning of a determiner. In particular, we find that when asked to evaluate sentences of the form every F is G, participants mentally group the Fs but not the Gs. Moreover, participants forego representing the group defined by the intersection of F and G. This tells against the idea that speakers understand every F is G as implying that the Fs bear relation (e.g., inclusion) to a second group.
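To make the two analyses concrete, here is one schematic way of writing them for every frog is green; this rendering is a sketch in standard notation, not the paper's own.

```latex
% Schematic rendering of the two views (not the paper's exact notation).
% Relational view: "every" expresses the inclusion relation between two sets,
% so the sentence is true iff the green things include the frogs:
\[
\{\,x : \mathrm{Frog}(x)\,\} \subseteq \{\,x : \mathrm{Green}(x)\,\}
\]
% Restricted view: "every frog" builds a restricted quantifier; the first
% argument fixes the domain, the second supplies a further condition:
\[
[\,\forall x : \mathrm{Frog}(x)\,]\ \mathrm{Green}(x)
\]
```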


Moving away from lexicalism in psycho- and neuro-linguistics

Against lexicalist assumptions in models of language production and comprehension.

Linguistics

Contributor(s): Ellen Lau, Alex Krauska

In standard models of language production or comprehension, the elements which are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items.” Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing as well as for the way that we understand different types of aphasia and other language disorders. As a consequence of the lexicalist assumptions of these models, many kinds of sentences that speakers produce and comprehend—in a variety of languages, including English—are challenging for them to account for. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like, as a model that does not rely on a lemma representation, but instead represents that knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
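As a toy illustration of the architectural contrast at issue (the entries and structures below are invented, not the model the paper proposes), a lexicalist lemma bundles meaning, syntax, and form into a single stored unit, while the non-lexicalist alternative stores two separate mappings, meaning-to-syntax and syntax-to-form:

```python
# Toy contrast only, with invented entries; not the model proposed in the paper.

# Lexicalist "lemma": one stored unit bundling meaning, syntax, and form.
lemma_lexicon = {
    "DOG": {"meaning": "dog-concept",
            "syntax": {"category": "N"},
            "form": "dog"},
}

# Non-lexicalist alternative: two separate mappings, so form (and meaning)
# can be partially determined by the syntactic context.
meaning_to_syntax = {
    "dog-concept": {"root": "DOG", "category": "N"},
}
syntax_to_form = {
    ("DOG", "N", "sg"): "dog",
    ("DOG", "N", "pl"): "dogs",   # same root, form fixed by syntactic context
}

print(syntax_to_form[("DOG", "N", "pl")])  # -> "dogs"
```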


A subject relative clause preference in a split-ergative language: ERP evidence from Georgian

Is processing subject-relative clauses easier even in an ergative language?

Linguistics

Contributor(s): Ellen Lau, Maria Polinsky
Non-ARHU Contributor(s): Nancy Clarke, Michaela Socolof, Rusudan Asatiani


A fascinating descriptive property of human language processing whose explanation is still debated is that subject-gap relative clauses are easier to process than object-gap relative clauses, across a broad range of languages with different properties. However, recent work suggests that this generalization does not hold in Basque, an ergative language, and has motivated an alternative generalization in which the preference is for gaps in morphologically unmarked positions—subjects in nominative-accusative languages, and objects and intransitive subjects in ergative-absolutive languages. Here we examined whether this generalization extends to another ergative-absolutive language, Georgian. ERP and self-paced reading results show a large anterior negativity and slower reading times when a relative clause is disambiguated to an object relative vs a subject relative. These data thus suggest that in at least some ergative-absolutive languages, the classic descriptive generalization—that object relative clauses are more costly than subject relative clauses—still holds.


The Binding Problem 2.0: Beyond Perceptual Features

On the problem of binding to object indices, beyond perceptual features.

Linguistics

Contributor(s): Ellen Lau, Xinchi Yu

The “binding problem” has been a central question in vision science for some 30 years: when encoding multiple objects or maintaining them in working memory, how do we correctly represent which feature belongs to which object? In this letter we argue that the boundaries of this research program in fact extend far beyond vision, and we call on the broader cognitive science community to pursue, in coordinated fashion, this central question for cognition, which we dub “Binding Problem 2.0”.


Parser-Grammar Transparency and the Development of Syntactic Dependencies

Learning a grammar is sufficient for learning to parse.

Linguistics

Contributor(s): Jeffrey Lidz

A fundamental question in psycholinguistics concerns how grammatical structure contributes to real-time sentence parsing and understanding. While many argue that grammatical structure is only loosely related to on-line parsing, others hold the view that the two are tightly linked. Here, I use the incremental growth of grammatical structure in developmental time to demonstrate that as new grammatical knowledge becomes available to children, they use that knowledge in their incremental parsing decisions. Given the tight link between the acquisition of new knowledge and the use of that knowledge in recognizing sentence structure, I argue in favor of a tight link between grammatical structure and parsing mechanics.


On substance and Substance-Free Phonology: Where we are at and where we are going

On the abstractness of phonology.

Linguistics

Contributor(s): Alex Chabot

In this introduction [to this special issue of the journal, on substance-free phonology], I will briefly trace the development of features in phonological theory, with particular emphasis on their relationship to phonetic substance. I will show that substance-free phonology is, in some respects, the resurrection of a concept that was fundamental to early structuralist views of features as symbolic markers, whose phonological role eclipses any superficial correlates to articulatory or acoustic objects. In the process, I will highlight some of the principal questions that this epistemological tack raises, and how the articles in this volume contribute to our understanding of those questions.


Underspecification in time

Abstracting away from linear order in phonology.

Linguistics

Contributor(s): William Idsardi

Substance-free phonology or SFP (Reiss 2017) has renewed interest in the question of abstraction in phonology. Perhaps the most common form of abstraction through the absence of substance is underspecification, where some aspects of speech lack representation in memorized representations, within the phonology or in the phonetic implementation (Archangeli 1988, Keating 1988, Lahiri and Reetz 2010 among many others). The fundamental basis for phonology is argued to be a mental model of speech events in time, following Raimy (2000) and Papillon (2020). Each event can have properties (one-place predicates that are true of the event), which include the usual phonological features, and also structural entities for extended events like moras and syllables. Features can be bound together in an event, yielding segment-like properties. Pairs of events can be ordered in time by the temporal logic precedence relation represented by ‘<’. Events, features and precedence form a directed multigraph structure with edges in the graph interpreted as “maybe next”. Some infant bimodal speech perception results are examined using this framework, arguing for underspecification in time in the developing phonological representations.
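The event-based representation described here can be sketched in a few lines of code. The sketch assumes only what the abstract states (events carrying one-place feature predicates, and precedence edges read as "maybe next"); the class, event names, and feature labels are invented for illustration.

```python
# Minimal sketch, assuming only what the abstract states; names are invented.
from collections import defaultdict

class SpeechEventGraph:
    """A directed multigraph of speech events in time.

    Each event carries a set of one-place feature predicates that are true
    of it; leaving a dimension out models underspecification. Precedence
    edges encode the relation '<', read as "maybe next".
    """

    def __init__(self):
        self.features = defaultdict(set)     # event -> features true of it
        self.maybe_next = defaultdict(list)  # event -> possibly-next events

    def add_event(self, event, *features):
        self.features[event].update(features)

    def add_precedence(self, earlier, later):
        self.maybe_next[earlier].append(later)

# Toy example: two events, the second left underspecified for place features.
g = SpeechEventGraph()
g.add_event("e1", "+consonantal", "+labial")
g.add_event("e2", "+vocalic")        # place left unspecified
g.add_precedence("e1", "e2")         # e1 < e2: e2 is "maybe next" after e1
```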


Structure. Concepts, Consequences, Interactions

Natural phenomena, including human language, are not just series of events but are organized quasi-periodically; sentences have structure, and that structure matters.

School of Languages, Literatures, and Cultures, Linguistics

Author/Lead: Juan Uriagereka, Howard Lasnik

Howard Lasnik and Juan Uriagereka “were there” when generative grammar was being developed into the Minimalist Program. In this presentation of the universal aspects of human language as a cognitive phenomenon, they rationally reconstruct syntactic structure. In the process, they touch upon structure dependency and its consequences for learnability, nuanced arguments (including global ones) for structure presupposed in standard linguistic analyses, and a formalism to capture long-range correlations. For practitioners, the authors assess whether “all we need is Merge,” while for outsiders, they summarize what needs to be covered when attempting to have structure “emerge.”

Reconstructing the essential history of what is at stake when arguing for sentence scaffolding, the authors cover a range of larger issues, from the traditional computational notion of structure (the strong generative capacity of a system) and how far down into words it reaches, to whether its variants, as evident across the world's languages, can arise from non-generative systems. While their perspective stems from Noam Chomsky's work, it does so critically, separating rhetoric from results. They consider what they do to be empirical, with the formalism being only a tool to guide their research (of course, they want sharp tools that can be falsified and have predictive power). Reaching out to sceptics, they invite potential collaborations that could arise from mutual examination of one another's work, as they attempt to establish a dialogue beyond generative grammar.
