Colloquia, Workshops, Dialogues And Tutorials
2025-2026
Fall
COGSCI Kickoff!
Date: September 29th, 4:00pm-6:00pm
Location: Five and Dime
Cognitive Science Speaker Series

Dr. Marisa Casillas
University of Chicago, Department of Comparative Human Development
Date: October 21st, 4:00pm
Location: Swift Hall 107
Website: Marisa Casillas, Assistant Professor
Title: What makes some words harder to learn?
Abstract:
How and when children learn words is shaped by a variety of cognitive (internal) and environmental (external) factors. Clear and consistent patterns in learning have led us to, e.g., conclude that abstract and relational concepts take longer to acquire than concrete ones (internal), and that frequency in child-directed speech straightforwardly predicts many aspects of learning (external). However, a paucity of comparative data from diverse developmental contexts has prevented us from drawing strong conclusions about the universality of these biases in word learning. Given that children's early home language environments are hugely variable across the world's communities, *and* given that languages show minimal evidence for universal structural traits, the key theoretical issue for developmental language science is to capture the experiences and mechanisms that give children the flexible adaptability to become adept local language learners. To ground our discussion of learning relational words, I'll discuss my lab's recent work on how children learn nouns, verbs, and kin terms in the US, Mexico, Papua New Guinea, and China. I'll use these examples to consider what kinds of adaptations children may make during learning and briefly discuss how we might try to model this adaptive system for learning word meanings.
Cognitive Science Speaker Series
Faculty Flash Talks
Date: November 11th, 2025, 4:00pm
Location: Swift Hall 107
Faculty Speakers:
Monita Chatterjee, Communication Sciences and Disorders
Jiayi Lu, Department of Linguistics
Winter
Cognitive Science Fridays
Mind & Machine Collaboration
Erika Exton, Linguistics, Psychology, Psychiatry & Behavioral Science
Date: January 9th
Location: Kellogg Hub 4302
Time: 12:30pm
RSVP: Cognitive Science Fridays - January 9th - Erika Exton (fill out form)
Title: "Balancing Interpretability, Objectivity, and Automaticity When Using Speech to Study Major Depressive Disorder"
Abstract:
When studying cognition, particularly in the context of sensitive mental health data, it is critical that our results be interpretable — transparently linked to hypothesized cognitive mechanisms. How can we respect interpretability while capitalizing on the efficiency and objectivity afforded by machine learning tools? I’ll discuss these issues in the context of my current research (bridging linguistics, communication sciences and disorders, and psychology). Using machine learning tools, I objectively and efficiently study speech in individuals with depression in a way that is interpretable, constrained, and theory-driven. Instead of taking a black-box approach to detect whether an individual is depressed from a short sample of speech, I use machine learning tools to extract interpretable and mechanistically meaningful speech features, relating them to disruptions to motor control that occur in depression (i.e., slowed, jerky movements). I used a custom deep learning algorithm to automatically assess speech features related to motor control and compared those patterns to performance in manual motor tasks. Results indicate that these measures are correlated in individuals who are currently depressed, suggesting that motor control dysfunction in depression may impact speech and manual motor behaviors similarly and that we can objectively and efficiently measure motor dysfunction using speech-based deep learning methods. The next stage of this research is to explore depression-related disruption to other aspects of cognitive functioning and speech using machine learning/AI approaches to efficiently measure a range of linguistic features at scale.
Cognitive Science Fridays
Mind & Machine Collaboration
Di Hu, Human Development and Social Policy
Date: February 13th
Location: Kellogg Hub 4302
Time: 12:30pm
Title: "Personalized Support for Adolescent Minds: A Memory-Enhanced AI Companion for School Engagement"
Abstract:
Adolescence is a sensitive window for cognitive and socio-emotional change. However, this critical phase now occurs within increasingly AI-saturated environments. Adolescents’ growing engagement with AI creates emerging risks but also a unique window for AI-driven personalized support. Our research program addresses these challenges in two stages.
First, to systematically evaluate the impact of AI on adolescent development, we conducted a PRISMA-guided systematic review of AI-driven applications for adolescents (aged 13–18) to analyze their current impact and translate the findings into design implications. Mapping outcomes to a cognition–affect–behavior (CAB) framework, the corpus (77 studies; 254 effect sizes) revealed a critical gap. Overall, AI-supported interventions yielded a small but positive impact on adolescent outcomes (g = 0.15). Analyses revealed that effects were strongest for affective and motivational outcomes (e.g., self-efficacy, engagement; g = 0.27, k=29), suggesting AI’s potential as a supportive companion. Cognitive learning gains were positive but modest (g = 0.16, k=54), while behavioral outcomes (e.g., retention, physical activity) showed negligible effects (g = 0.04, k=24), highlighting a gap between digital engagement and real-world behavioral transfer. Narrative synthesis identified intelligent tutoring and mental health chatbots as active areas but also highlighted risks of over-trust, bias, and privacy.
Building on these findings, we developed a memory-enhanced AI companion designed to address the identified gaps in affective, cognitive, and behavioral support. Powered by a Large Language Model (LLM) with a transparent knowledge graph, the system is engineered to deliver Just-in-Time Adaptive Interventions (JITAI) tailored to adolescents’ developmental needs (e.g., autonomy, competence). Unlike generic chatbots, this system maintains long-term context to model user states and deliver evidence-based micro-interventions (e.g., emotion regulation prompts, strategy coaching). The architecture prioritizes safety and interpretability, providing transparent and verifiable AI interactions that empower adolescents while reassuring parents and educators.
Together, this program connects empirical evidence with technical design, establishing a safety-centered foundation for responsible and personalized AI support for adolescents.
Cognitive Science Speaker Series

Dr. Mark Steyvers
University of California, Irvine, Department of Cognitive Sciences
Date: February 17th, 4:00pm
Location: Swift Hall 107
Website: Mark Steyvers, Professor and Chair
Title: Human–AI Collaboration: Performance, Uncertainty, and Human Preferences
Abstract:
This talk presents an overview of recent research on human–AI collaboration, focusing on the conditions under which collaboration improves performance and the challenges that arise as AI systems become increasingly capable. I begin by reviewing several collaborative workflows, including non-interactive statistical aggregation, AI-assisted decision-making in which humans receive AI advice, and agentic settings where humans and AI systems jointly act in dynamic environments. Across these paradigms, I examine empirical findings on when and why collaboration improves or fails.
A central challenge is achieving complementarity—cases in which joint human–AI performance exceeds that of either humans or AI alone. I discuss empirical evidence on when complementarity arises and key factors that limit it. As AI systems increasingly outperform humans, opportunities for complementarity diminish, shifting the problem toward managing asymmetric collaborations. In these settings, effective uncertainty communication is essential so that humans can appropriately rely on AI outputs.
I review recent evidence showing that large language models maintain internal signals of uncertainty but often fail to communicate this uncertainty effectively to users, leading to overconfidence and overreliance. Behavioral studies demonstrate substantial gaps between model confidence and human perceptions of confidence, which are further amplified by features such as explanation length. However, targeted fine-tuning can improve uncertainty communication by aligning verbal expressions of confidence with internal reliability signals, improving both calibration and discrimination.
Finally, I discuss how human subjective evaluations shape collaboration, especially in agentic contexts. Beyond accuracy and efficiency, people value AI collaborators that behave cooperatively, allow meaningful human contribution, and are enjoyable to work with. Together, these findings highlight the importance of aligning performance, uncertainty communication, and human preferences in the design of collaborative AI systems.
Cognitive Science Fridays
Mind & Machine Collaboration
Zhe-Chen Guo, Linguistics, Communication Sciences & Disorders
Date: March 13th
Location: Kellogg Hub 4302
Time: 12:30pm
Title: "Consistency of Phoneme-Related Potentials to Naturalistic Continuous Speech"
Abstract:
TBA
Spring
Cognitive Science Fridays
Mind & Machine Collaboration
Yi Chun Hung, Computer Science
Date: April 10th
Location: Kellogg Hub 4302
Time: 12:30pm
Title: "When Color Becomes Threat: Mapping High-Dimensional Signals with Behavioral Manifolds in Mantis Shrimp"
Abstract:
TBA
Cognitive Science Speaker Series
Dr. Jessica Grahn, Western University, Psychology Department
Date: May 5th, 4:00pm
Location: Swift Hall 107
Website: Jessica Grahn, Professor
Title: TBA
Abstract:
TBA
Cognitive Science Fridays
Mind & Machine Collaboration
YuanYang "YY" Teng, Technology and Social Behavior
Date: May 8th
Location: Kellogg Hub 4302
Time: 12:30pm
Title: "GenAI at the Divergent-to-Convergent Inflection: Supporting Decision-Making Under Uncertainty for Blind and Low-Vision Individuals"
Abstract:
TBA
Cognitive Science Fridays
Mind & Machine Collaboration
Xudong Tang, Computer Science, NICO
Date: June 5th
Location: Kellogg Hub 4302
Time: 12:30pm
Title: "Mapping Human and Machine Perception of Voice Identity Similarity"
Abstract:
TBA