Reachable environments – the tops of our desks, counters, or workbenches – form the background to many of our everyday activities. As soon as we walk up to such a space, we understand what objects it contains, how it is laid out, and what actions it affords. How are these reachable views represented in the brain to make such rapid understanding possible? What brain regions are recruited, and what aspects of the environment are they encoding? What kinds of computations are required in this process, and do these computations differ fundamentally from those that support our understanding of navigable environments and single objects?

I am a graduate student working on these questions with Talia Konkle in the Cognitive and Neural Organization Lab at Harvard University. Previously, I worked as a research assistant with Jeremy Wolfe in the Visual Attention Lab at Brigham and Women's Hospital, and as an undergraduate thesis student with Sean MacEvoy in the Vision and Cognition Lab at Boston College. Outside of my research, you can find me hiking, climbing, kayaking, or sitting on the couch watching hockey.