
Michael Jones, 4:00pm Tuesday, January 21st


Michael Jones

Indiana University

    Tuesday, January 21st
    Swift 107
    4:00pm (Reception to follow)

The stability-plasticity dilemma in predictive neural network models of semantic memory

The field of distributional semantics is broadly concerned with elucidating the computational mechanisms that humans use to learn word meanings from statistical redundancies in the language environment. Over the past 20 years, the most successful models have constructed semantic representations by applying dimensionality reduction algorithms to observed word co-occurrences. But the field has not been immune to the enormous influence of deep learning models from machine learning and their impressive quantitative accuracy. We have recently seen the emergence of predictive neural network models that use principles of reinforcement learning to create a “neural embedding” of word meaning from a language corpus.
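
For readers unfamiliar with these models, the sketch below shows how a predictive neural embedding of the kind described above can be trained with the gensim library. It is illustrative only, not the speaker’s code, and the tiny two-sentence corpus is a hypothetical stand-in; the negative-sampling and frequency-subsampling parameters foreshadow the efficiency modifications discussed below.

    # Minimal sketch of a predictive "neural embedding" model (skip-gram),
    # trained with gensim 4.x; illustrative only, not the speaker's code.
    from gensim.models import Word2Vec

    # A hypothetical toy corpus: a list of tokenized sentences.
    sentences = [["the", "bank", "approved", "the", "loan"],
                 ["she", "sat", "on", "the", "river", "bank"]]

    model = Word2Vec(
        sentences,
        vector_size=100,  # dimensionality of the learned embedding
        window=5,         # context window around each target word
        sg=1,             # skip-gram: predict context words from the target
        negative=5,       # negative sampling (efficiency modification)
        sample=1e-3,      # frequency subsampling (efficiency modification)
        min_count=1,
        epochs=50,
    )

    # Learned vectors live in model.wv; words that appear in similar
    # contexts end up with similar vectors.
    print(model.wv.most_similar("bank", topn=3))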

These models have taken the field by storm, partially due to the resurgence of connectionist architectures, but also due to their remarkable accuracy at fitting human data. However, predictive embedding models also inherit the weaknesses of their ancestors. In this talk, I will evaluate how susceptible modern neural network models are to the classic problem of catastrophic interference as a function of the sequence in which word senses are learned. I will also evaluate how much of a neural network’s power comes from its theoretical claims versus atheoretical machine-learning modifications designed to boost algorithmic efficiency (e.g., negative sampling and frequency subsampling). I will make an argument in favor of much simpler Hebbian learning models of lexical semantics over error-driven neural networks, and call into question whether prediction/error-correction is the brain’s default mode for general semantic learning.
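
To make the contrast concrete, here is a toy numpy sketch of the two update rules at issue: a Hebbian rule that simply accumulates context patterns, and an error-driven (delta) rule that corrects prediction error. The sketch is mine, not the speaker’s, and the two-sense word “bank” is a hypothetical stand-in.

    # Toy contrast between Hebbian and error-driven learning, and the
    # catastrophic interference the latter invites; an illustrative
    # sketch, not the speaker's models.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 50
    # Random context patterns for the two senses of "bank".
    ctx = {w: rng.normal(size=dim) for w in ["money", "river"]}

    def hebbian(v, c, lr):
        # Hebbian: accumulate the context pattern; earlier associations
        # are never overwritten, only added to.
        return v + lr * c

    def error_driven(v, c, lr):
        # Error-driven (delta rule): correct the prediction error, which
        # rewrites existing weights.
        return v + lr * (c - v)

    def train(rule, phases, lr=0.1, steps=100):
        v = np.zeros(dim)     # representation of the word "bank"
        for sense in phases:  # senses are learned sequentially
            for _ in range(steps):
                v = rule(v, ctx[sense], lr)
        return v

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Learn "bank" first as a financial term, then only as a riverbank.
    for name, rule in [("hebbian", hebbian), ("error-driven", error_driven)]:
        v = train(rule, ["money", "river"])
        print(f"{name}: sim(money)={cos(v, ctx['money']):.2f}, "
              f"sim(river)={cos(v, ctx['river']):.2f}")

In this toy run, the error-driven learner’s similarity to the first sense collapses after the second training phase, while the Hebbian learner retains both: a miniature analogue of the sequence-dependent interference the talk evaluates.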


Bio:
Mike Jones is Professor of Psychology, Cognitive Science, and Informatics at Indiana University, where he holds the William and Katherine Estes Endowed Chair in Cognitive Modeling. He completed his graduate work at Queen’s University and was subsequently a postdoc at the Institute of Cognitive Science, University of Colorado. Mike’s research focuses on knowledge representation in humans and machines, with a particular focus on the computational mechanisms the brain uses to learn and represent lexical semantics from multisensory data. He has been awarded Outstanding Career Awards from the National Science Foundation, the Federation of Behavioral and Brain Sciences, and the Psychonomic Society. Mike is currently Editor-in-Chief of Behavior Research Methods and Editor of the recent book Big Data in Cognitive Science. His research is funded by the National Science Foundation, National Institutes of Health, Institute of Education Sciences, Clinical and Translational Sciences Institute, and Google Research.