How do brains enable organisms to learn from observing and interacting with the world? How do their architectural constraints shape this learning and the structure of emergent neural representations? How can artificial intelligence inform our understanding of biological intelligence, and vice versa?
These are some of the questions I am working on as a Postdoctoral Fellow in the Harvard Vision Sciences Lab, within the Psychology Department and the Kempner Institute for Natural and Artificial Intelligence. I am primarily advised by Talia Konkle, and I collaborate with George Alvarez and others in the Vision Sciences Lab, along with Hanspeter Pfister and others in the Visual Computing Group. Currently, I am focused on developing computational vision models that learn to see more like humans and can provide greater insight into the neural computations underlying high-level vision. A particularly active interest is foveation, whereby the human retina over-samples the center of gaze (the fovea), leading to a systematic over-representation of the fovea in the retinotopic maps of visual cortex. I am studying how foveation affects high-level vision (providing efficient sampling, demanding active mechanisms, and supplying distinctive signals for self-supervised learning) as well as the organization of high-level visual cortex, where retinotopic organization scaffolds all later processing. Here, as in my prior work, I use the modern AI toolkit to implement classic and new ideas from neuroscience and cognitive science in order to make theoretical scientific advances.
I received my Ph.D. in Neural Computation from Carnegie Mellon University in December 2023, where I was advised by David C. Plaut and Marlene Behrmann. My Ph.D. work involved developing computational models of familiar and unfamiliar face recognition and of cortical organization for visual domains, as well as empirical investigations into the hemispheric organization of high-level visual cortex. Much of my research continues to build upon the topographic models I developed during my Ph.D., for example extending them to account for the influence of long-range connectivity on the global organization of human ventral temporal cortex, and modeling the spatial organization of language processing with topographic Transformer language models (see also our newer work: topoLM). I see this work as a set of critical first steps toward the development of large-scale, spatially embedded, functional models of the human brain, a challenging task on which I hope to make substantial progress in the coming years.
Practically, my work aims to be useful for building efficient, sustainable AI: as impressive as it is, current AI is orders of magnitude less energetically efficient than the human brain, which runs on ~20 watts. The spatial embedding of neural computations (at many levels, from dendrites to columns, hypercolumns, areas, and networks) seems to be a key motif contributing to the brain's remarkable energetic efficiency.
Some pretty but uninformative pictures of me