My Research

In my research, I use perception science, cognitive science, and cognitive neuroscience to explore how we interact with the world around us, in both digital and analog environments. I have contributed to basic-science advances in human perception, particularly the perception of reach-relevant spaces, and I enjoy tackling applied problems through a cognitive lens. Recently, this has included problems in stimulus detectability and visual search, such as deepfake detection by untrained observers and threat detection in radiology and safety screenings.

Perceiving the world at our fingertips

Imagine you are working on a computer, cutting a steak, painting a picture, or soldering a wire. In each of these activities, you are viewing a space within a few feet of the body, which contains multiple objects and affords hand-based actions. How are such near-scale environments represented in the brain and mind? In my work, I have explored this question from many different perspectives. I have shown that understanding such reach-relevant spaces (or "reachspaces") requires different perceptual processes than understanding single objects or navigable-scale spaces: these different kinds of visual input have different low- and mid-level statistics, and elicit activity in different brain regions. I have also probed the conceptual organization of our world at this scale, and found evidence that humans make distinctions between environments that are analog vs. digital (e.g., desks vs. control panels), between environments that support active engagement vs. storage (e.g., desks vs. shelves), and between food-related and non-food-related environments. In ongoing work, I am further exploring what factors underlie these distinctions. Finally, I am the creator of the Reachspace Database, the first large-scale database of near-scale environments.

 

Related work: 

  • Josephs, E.L. & Konkle, T. (2020). Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex. Proceedings of the National Academy of Sciences, 117(47), 29354-29362. [author's copy]

  • Josephs, E.L. & Konkle, T. (2021, May). Emergent dimensions underlying human perception of the reachable world. Poster presented at the 21st annual meeting of the Vision Sciences Society. [pdf]

  • Josephs, E.L., Zhao, H., & Konkle, T. (2021). The world within reach: an image database of reach-relevant environments. Journal of Vision, 21(14), 1-11. [pdf]

Seeing is not believing: mitigating information transmission from deepfakes

Fake or manipulated video media (colloquially known as "deepfakes") pose a clear threat to safety and wellbeing in online spaces. Much research has gone into building computer vision models that can detect when a video has been manipulated, but how do we warn the human user once a video has been flagged as fake? People have a tendency to believe their own eyes, so how can we best convince them that a realistic-looking video might not in fact be real? In ongoing work, I am developing and comparing methods for alerting a human user when a video is fake, including "Caricatures" of the video: versions of the deepfake in which distortions are amplified. With this human factors work, we hope to put recent computer vision advances to practical use in fighting misinformation transmission.

 

Related work: 

  • *Fosco, C.L., *Josephs, E.L., Andonian, A., Lee, A., Wang, X. & Oliva, A. (under review). Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines. Asterisks denote co-first-author contributions. [preprint]

Searching and finding in a crowded world

 

The visual input to our system is rich, crowded and complex. How do we ever find what we are looking for, in an efficient, timely and reliable manner? This is a challenge faced in everyday settings, such as when you are searching for your car keys on the kitchen counter, but also in high-stakes situations, like when a radiologist searches medical images for anomalies, or a baggage screener searches a bag for dangerous materials. In previous work, I have examined the conditions that help people locate search targets, or detect differences among images. For example, I tested user interfaces (UIs) for the detection of minor, point differences between images, and found that UIs that rapidly alternate between images are better than those that present images side by side. I have also explored the consequences of performing a search for an object on downstream memory tasks like recall and recognition.

Related work: 

  • Josephs, E.L., Draschkow, D., Wolfe, J.M., & Võ, M.L-H. (2016). Gist in time: scene semantics and structure enhance recall of searched objects. Acta Psychologica, 169, 100-108. [pdf]

 

  • Josephs, E.L., Drew, T., & Wolfe, J.M. (2016). Shuffling your way out of change blindness. Psychonomic Bulletin and Review, 23(1), 193-200. [pdf]

 

  • Wolfe, J.M., Evans, K.K., Drew, T., Aizenman, A.A., & Josephs, E.L. (2016). How do radiologists use the human search engine? Radiation Protection Dosimetry, 169(1-4), 24-31.