Imagine you are working on a computer, cutting a steak, painting a picture, or soldering a wire. Such activities take place in a very particular kind of environment: spaces within a few feet of the body, which contain multiple objects and afford hand-based actions. How are such spaces represented in the brain and mind? Current theories of visual processing can account for how we perceive objects (3-D spatially bounded entities) and scenes (navigable-scale indoor or outdoor spaces), but it is not clear how these theories apply to the reachable-scale spaces in which we perform most of our everyday tasks.

In my work, I examine how the visual system analyzes and represents reachable environments. I use the term “reachspaces” to refer to such environments, and operationalize them as near-scale spaces, within 3-4 feet of the body, which consist of collections of objects on a horizontal surface, and which support task-oriented behavior.

Here are some questions I am currently pursuing, using a combination of functional neuroimaging, behavioral psychophysics, and machine vision models.

How are reachable environments represented in the brain?

In the brain, objects and scenes are processed in distinct networks. Scene-selective regions represent the geometric layout of a space, its navigational affordances, and its relationship to the larger environment (Epstein & Baker, 2019). Object-selective regions represent the shape of objects in a manner that is robust to confounding low-level contours and to minor changes in size or position (Grill-Spector, Kourtzi, & Kanwisher, 2001). How does each of these pathways contribute to the processing of reachable environments? And are there regions outside of these networks, with different kinds of representations, that contribute to understanding reachable environments in particular?
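
One way to approach this question is to measure responses in object- and scene-selective regions (and elsewhere in visual cortex) while participants view matched images of objects, scenes, and reachspaces. The sketch below is a toy illustration of that kind of region-level contrast, using simulated response values; all names and numbers are placeholders, not my actual analysis pipeline.

```python
# Toy illustration: does a hypothetical ROI respond more to reachspace
# views than to object and scene views? All values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # simulated per-trial responses (e.g., GLM betas)
objects     = rng.normal(0.8, 0.3, size=60)  # 60 trials of object views
scenes      = rng.normal(0.9, 0.3, size=60)  # 60 trials of scene views
reachspaces = rng.normal(1.2, 0.3, size=60)  # 60 trials of reachspace views

# Independent-samples t-tests for a reachspace preference over each
# established category.
t_obj, p_obj = stats.ttest_ind(reachspaces, objects)
t_scn, p_scn = stats.ttest_ind(reachspaces, scenes)
print(f"reachspaces vs. objects: t = {t_obj:.2f}, p = {p_obj:.4g}")
print(f"reachspaces vs. scenes:  t = {t_scn:.2f}, p = {p_scn:.4g}")
```

A region that showed this profile across many such comparisons, and that fell outside the classic object- and scene-selective networks, would be a candidate for reachspace-specific processing.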

Related publications: Josephs, E.L. & Konkle, T. (under review). Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex. preprint

What visual features characterize reachable environments?

Things that belong to the same category tend to look alike. For scenes, members of a category (such as “forest” or “field”) share global features, such as openness, mean depth, and navigability, with other members of that category (Greene & Oliva, 2009). Do reachable environments have characteristic features that set them apart from scenes? If so, what are those primitives like? What is the “alphabet” of visual features that can be combined to make a reachspace, and how does it differ from that of scenes?
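
To make the flavor of this question concrete, here is a minimal sketch of how one might test whether simple global image statistics separate two kinds of views. It stands in crude summary statistics (mean luminance, contrast, and the balance of low- versus high-spatial-frequency energy, loosely in the spirit of GIST-style descriptors) for real candidate features, and random textures for real photographs; it illustrates the logic, not the analysis from the paper.

```python
# Sketch: can a linear classifier separate two image classes using only
# crude global statistics? Stand-in textures replace real photographs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def global_features(img):
    """img: 2-D grayscale array in [0, 1] -> three global statistics."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt(yy ** 2 + xx ** 2)
    low = spectrum[radius < min(h, w) / 8].sum()    # coarse structure
    high = spectrum[radius >= min(h, w) / 8].sum()  # fine detail
    return [img.mean(), img.std(), low / (low + high)]

rng = np.random.default_rng(1)
def fake_image(smooth):
    img = rng.random((64, 64))
    if smooth:  # local averaging mimics coarser spatial structure
        img = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3
    return img

# First 50 "scene-like" (smoother) images, last 50 "reachspace-like".
X = np.array([global_features(fake_image(smooth=i < 50)) for i in range(100)])
y = np.array([0] * 50 + [1] * 50)
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With real photographs, above-chance accuracy from such global statistics would suggest that reachspaces and scenes differ in their coarse visual structure, before any object recognition takes place.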

Related publications: Josephs, E.L. & Konkle, T. (2019). Perceptual dissociations among views of objects, scenes, and reachable spaces. Journal of Experimental Psychology: Human Perception and Performance. preprint

What dimensions describe human judgments of reachable environments?

Decades of work have established the ways that objects can differ from each other: they can be animate or inanimate, big or small, manipulable or not, natural or manmade. So far, we don’t have such an understanding of reachable environments. What are the dimensions that distinguish one reachable environment from another? How are these dimensions encoded in neural responses to reachspaces? How do they relate to visual or conceptual features of the environment?
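
A common way into this question is to collect pairwise dissimilarity judgments over many reachspace images and embed them in a low-dimensional space whose axes can then be interpreted. The sketch below runs multidimensional scaling on a simulated judgment matrix; the data and sizes are placeholders for illustration only.

```python
# Sketch: recover a low-dimensional embedding from pairwise
# dissimilarity judgments (simulated here) via multidimensional scaling.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n_images = 20

# A dissimilarity matrix must be symmetric with a zero diagonal; real
# values would come from a pairwise-rating or odd-one-out task.
judgments = rng.random((n_images, n_images))
dissim = (judgments + judgments.T) / 2
np.fill_diagonal(dissim, 0.0)

# dissimilarity="precomputed" tells MDS the input is already a distance
# matrix rather than a set of feature vectors.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(dissim)
print(embedding.shape)  # (20, 2): one 2-D coordinate per image
```

Each recovered axis can then be compared against candidate visual or conceptual features to ask what it encodes.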

Related publications: this work is ongoing, check back soon!