Imagine you are working on a computer, cutting a steak, painting a picture, or soldering a wire. When performing any of these activities, you’ll find yourself interacting with a very particular kind of space: space within a few feet of the body, which contains multiple objects and affords hand-based actions. How are such spaces represented in the brain and mind? Current theories of visual processing focus on how we perceive objects (3-D, spatially bounded entities) and scenes (large-scale indoor or outdoor spaces), but relatively little work has applied these theories to the small-scale spaces in which we perform most of our everyday tasks. My work aims to fill this gap.

At the moment, I call these kinds of spaces reachspaces. Here are some questions I am currently pursuing.


What visual features characterize reachspaces?

Things that belong to the same category tend to look alike. For scenes, members of a category (such as “forest” or “field”) share global features, such as openness, mean depth, and navigability, with other members of the category (Greene & Oliva, 2009). Do reachspaces have characteristic features that set them apart from scenes? If so, what are those reachspace primitives like? We are exploring these questions with both behavioral and computational methods.
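To give a concrete flavor of one computational approach, here is a minimal sketch in Python that asks whether a handful of coarse global image statistics can separate reachspace photographs from scene photographs. The particular feature set, the image lists (reach_paths, scene_paths), and the two-way framing are illustrative assumptions for this sketch, not our actual analysis pipeline.

import numpy as np
from skimage import color, feature, io
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def global_features(path):
    """A few coarse, whole-image statistics as stand-ins for
    scene-level primitives. Assumes RGB images."""
    img = io.imread(path)
    gray = color.rgb2gray(img)
    hsv = color.rgb2hsv(img)
    edges = feature.canny(gray)
    return np.array([
        edges.mean(),        # edge density (a clutter proxy)
        gray.mean(),         # mean luminance
        gray.std(),          # luminance contrast
        hsv[..., 1].mean(),  # mean saturation
    ])

# reach_paths and scene_paths are hypothetical lists of image files.
X = np.stack([global_features(p) for p in reach_paths + scene_paths])
y = np.array([1] * len(reach_paths) + [0] * len(scene_paths))

# If a linear classifier separates the two categories from such coarse
# statistics, the categories plausibly differ in their global features.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())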

What are the neural correlates of reachspace perception?

In the brain, objects and scenes are processed in distinct but overlapping networks, in part because objects and scenes are very different kinds of input and so require different processing pathways to be understood. How does each of these pathways contribute to the processing of reachspaces? And, given the large differences between reachspaces, scenes, and objects, could there be a network specialized for processing reachspaces? We are using functional magnetic resonance imaging (fMRI) to explore how and where near-scale space is processed in the brain.
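To illustrate the fMRI side, below is a minimal sketch of a localizer-style analysis using nilearn, contrasting blocks of reachspace images against blocks of scene images. The file names, block timing, TR, and contrast are hypothetical placeholders, not our actual experimental design.

import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical inputs: a preprocessed 4D BOLD run and an events table
# with onsets for blocks of reachspace, scene, and object photographs.
events = pd.DataFrame({
    "onset":      [0, 16, 32, 48, 64, 80],
    "duration":   [16] * 6,
    "trial_type": ["reachspace", "scene", "object"] * 2,
})

model = FirstLevelModel(t_r=2.0, smoothing_fwhm=5.0)
model = model.fit("sub-01_task-loc_bold.nii.gz", events=events)

# Voxels responding more strongly to reachspaces than to scenes would
# be candidate members of a reachspace-selective region.
zmap = model.compute_contrast("reachspace - scene")
zmap.to_filename("reachspace_vs_scene_zmap.nii.gz")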