Unconscious inference in motion perception

Displays in the experiment are situated half a meter and one meter from the participant. Under our model of how humans estimate object motion out in the world, this should make speed measurements on the far screen less reliable even if the stimuli have the same retinal size.

People show systematic biases in the perceived speeds and directions of motion under many conditions in which objects are hard to see, for example under low contrast conditions like driving in fog or rain. Under these low contrast conditions, objects appear to move slower than they actually do in the world. This phenomenon has most commonly been explained as an example of unconscious perceptual inference: the brain is constantly building a model of object speeds in the world that combines the current speed information coming in from the eyes (weighted by the uncertainty in that measurement) with expectations about speed (based on previous experience of how often different speeds are encountered in the world).
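
As a rough illustration of that inference (a minimal sketch with Gaussian likelihoods, a Gaussian slow-speed prior centered at zero, and made-up numbers, not the model from the papers below): when the incoming measurement is noisier, as at low contrast, the posterior estimate is pulled harder toward the prior and the object appears slower.

```python
def posterior_speed_estimate(measured_speed, sigma_meas, prior_mean=0.0, sigma_prior=4.0):
    """Posterior mean for a Gaussian likelihood combined with a Gaussian slow-speed prior.

    The estimate is a reliability-weighted average of the measurement and the
    prior mean; noisier measurements (larger sigma_meas) shift the estimate
    toward the prior, which here is centered on slow speeds.
    """
    w_meas = 1.0 / sigma_meas**2    # reliability of the sensory measurement
    w_prior = 1.0 / sigma_prior**2  # strength of the slow-speed prior
    return (w_meas * measured_speed + w_prior * prior_mean) / (w_meas + w_prior)

true_speed = 8.0  # deg/s
print(posterior_speed_estimate(true_speed, sigma_meas=1.0))  # high contrast: ~7.5 deg/s
print(posterior_speed_estimate(true_speed, sigma_meas=3.0))  # low contrast:  ~5.1 deg/s
```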

Another major component of these perceptual speed biases lies in the transformation of retinal speeds into world speeds. An ideal observer should perform speed inference in world-centered coordinates, which requires a transformation of the uncertainty in the retinal measurements into uncertainty in world speeds. The degree to which this uncertainty is scaled depends on distance and retinal eccentricity, which predicts that contrast-dependent speed biases should increase with viewing distance.
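
A rough sketch of why that predicts distance-dependent biases (small-angle geometry, Gaussian prior and likelihood, illustrative numbers only): converting a retinal speed to a world speed multiplies both the measurement and its noise by the viewing distance, so the same retinal uncertainty produces a broader world-speed likelihood, and a stronger pull toward the slow-speed prior, at the far screen.

```python
import numpy as np

def world_speed_posterior(retinal_speed_deg, sigma_retinal_deg, distance_m,
                          prior_mean=0.0, sigma_prior=0.1):
    """Infer world speed (m/s) from a retinal speed measurement (deg/s).

    Small-angle approximation: a retinal speed of w deg/s at distance D
    corresponds to roughly D * w * pi/180 m/s for fronto-parallel motion.
    The retinal uncertainty is scaled by the same factor, so the likelihood
    is broader at larger viewing distances. The prior here is an illustrative
    narrow distribution favoring slow world speeds.
    """
    scale = distance_m * np.pi / 180.0
    mu_like = retinal_speed_deg * scale      # measured world speed (m/s)
    sigma_like = sigma_retinal_deg * scale   # world-speed uncertainty (m/s)
    w_like, w_prior = 1.0 / sigma_like**2, 1.0 / sigma_prior**2
    return (w_like * mu_like + w_prior * prior_mean) / (w_like + w_prior)

# Same retinal stimulus and retinal noise, two viewing distances:
for D in (0.5, 1.0):
    est = world_speed_posterior(10.0, 2.0, distance_m=D)
    true = 10.0 * D * np.pi / 180.0
    print(f"distance {D} m: perceived/true speed = {est / true:.2f}")  # ~0.97 near, ~0.89 far
```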

Published reports

  1. Manning TS, Pillow JW, Rokers B, Cooper EA. (2022) Humans make non-ideal inferences about world motion. Journal of Vision 2022;22(14):4054. doi: 10.1167/jov.22.14.4054.
  2. Manning TS, Pillow JW, Rokers B, Cooper EA. (2023) Contrast-dependent speed biases are distance-dependent too. In Prep.

Code repositories

Optimal neural codes for binocular disparity

Top: a map of binocular disparities in a natural image. Middle: disparity histograms within the central 10 degrees of vision during navigation and food preparation tasks. Bottom: Fisher information in three populations of neurons in the brain that encode disparity.

Binocular disparity, the difference between where the same point in space falls on the two eyes, is a useful visual cue for creating a vivid percept of the world in 3D. How, though, does the brain encode this cue to maximize its usefulness for vision? In this collaborative project with Dr. Emma Alexander, Dr. Gregory DeAngelis, and Dr. Xin Huang, we use a combination of image statistics and information-theoretic analyses of neuronal responses to disparity in different brain areas to answer this question.

In short, there is a balancing act in the brain between: 1) the precision with which neurons can represent a physical variable, 2) the amount of metabolic resources a neural population requires, and 3) the loss function used to shape the population's selectivity (e.g., should the brain maximize sensory signal fidelity or discriminability?). We believe that there is a fundamental change in the loss functions guiding sensory encoding as one ascends the roughly hierarchical network of the primate visual system.
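
To make the information-theoretic side concrete, here is an informal sketch (made-up Gaussian tuning curves with independent Poisson spiking, not the populations analyzed in the papers below) of how population Fisher information quantifies the precision with which a disparity value can be decoded, and how it depends on where the tuning curves place their steep flanks.

```python
import numpy as np

def population_fisher_information(disparities, centers, width=0.3, peak_rate=20.0):
    """Fisher information of independent Poisson neurons with Gaussian tuning.

    For a Poisson neuron with mean rate f(s), the Fisher information about s
    is f'(s)^2 / f(s); for independent neurons it sums across the population.
    """
    s = disparities[:, None]                                       # stimulus values
    f = peak_rate * np.exp(-0.5 * ((s - centers) / width) ** 2)    # tuning curves
    fprime = f * -(s - centers) / width**2                         # derivative w.r.t. s
    return np.sum(fprime**2 / f, axis=1)

disparities = np.linspace(-2, 2, 201)   # deg
centers = np.linspace(-1, 1, 24)        # preferred disparities clustered near zero
FI = population_fisher_information(disparities, centers)
# More information near the disparities the population covers densely:
print(FI[np.abs(disparities) < 0.1].mean(), FI[np.abs(disparities) > 1.5].mean())
```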

Published reports

  1. Manning TS, Alexander E, Cumming BG, DeAngelis GC, Huang X, Cooper EA. (2023) Transformations of sensory information in the brain reflect a changing definition of optimality. Preprint.
  2. Manning TS, Alexander E, DeAngelis GC, Huang X, Cooper EA. (2021) Role of MT Disparity Tuning Biases in Figure-Ground Segregation. Poster presented at Society for Neuroscience, Chicago, IL.

Code repositories

Improved methods for estimating perceptual priors

The goal of psychophysics is to define a lawful relationship between externally quantifiable aspects of the world (like luminance, distance, mass, etc.) and our perception of the world, which is based on noisy measurements from our biological sensors. This problem has direct analogs in signal detection, computer vision, and related fields, and as such, human psychophysics research has used many of the same computational methods for modeling the brain with ideal observer frameworks. One aspect of perception that remains less well defined is the set of assumptions the brain makes while interpreting these sensory inputs; these assumptions are commonly called perceptual priors. They are thought to underlie the systematic biases that people exhibit when making judgments about things like the angles of contours, the speeds of objects, or their own self-motion.

Perceptual priors are often assumed to reflect the brain learning statistical regularities in the environment (e.g., the distribution of contour orientations or object speeds). It remains an open question how well our priors capture these statistics (as measured by head-mounted cameras, etc.) and how much these priors vary between individuals. One obstacle to resolving this question is the limitation of the mathematical tools used to estimate priors: researchers have either derived them nonparametrically or defined them with parametric forms like Gaussians or piecewise linear functions. This project had two main goals: 1) to define the accuracy and precision with which we can estimate priors (so we can assess how much they vary between people and how well they match stimulus statistics), and 2) to adapt existing statistical methods from other fields for use in perceptual science so that we are less constrained in the range of possible priors we can faithfully estimate.
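
A toy version of the first goal (purely illustrative, not the framework from the papers below): simulate an observer whose reports are posterior means under a known Gaussian prior, recover the prior width by maximum likelihood from a finite number of trials, and repeat to see how the precision of the recovered prior depends on the amount of data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(stimuli, sigma_meas, sigma_prior, prior_mean=0.0):
    """Observer reports the posterior mean of each noisy measurement."""
    m = stimuli + rng.normal(0.0, sigma_meas, size=stimuli.shape)
    w_m, w_p = 1.0 / sigma_meas**2, 1.0 / sigma_prior**2
    return (w_m * m + w_p * prior_mean) / (w_m + w_p)

def fit_prior_width(stimuli, responses, sigma_meas, grid=np.linspace(0.2, 10, 500)):
    """Grid-search maximum likelihood over the prior width (prior mean fixed at 0).

    Under the model above, responses are Gaussian with mean k*s and sd k*sigma_meas,
    where k = sigma_prior^2 / (sigma_prior^2 + sigma_meas^2).
    """
    best, best_ll = None, -np.inf
    for sp in grid:
        k = sp**2 / (sp**2 + sigma_meas**2)
        sd = k * sigma_meas
        ll = np.sum(-0.5 * ((responses - k * stimuli) / sd) ** 2 - np.log(sd))
        if ll > best_ll:
            best, best_ll = sp, ll
    return best

true_sigma_prior, sigma_meas = 2.0, 1.0
for n_trials in (50, 500):
    fits = []
    for _ in range(20):  # repeated simulated experiments
        stimuli = rng.uniform(-5, 5, n_trials)
        responses = simulate_responses(stimuli, sigma_meas, true_sigma_prior)
        fits.append(fit_prior_width(stimuli, responses, sigma_meas))
    # the spread of recovered prior widths shrinks as the trial count grows
    print(n_trials, round(np.mean(fits), 2), round(np.std(fits), 2))
```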

Published reports

  1. Manning TS, Naecker BN, McLean IR, Rokers B, Pillow JW, Cooper EA. (2022) A general framework for inferring Bayesian ideal observer models from psychophysical data. eNeuro. doi: 10.1523/ENEURO.0144-22.2022.
  2. Manning TS, McLean IR, Naecker BN, Pillow JW, Rokers B, Cooper EA. (2021) Estimating perceptual priors with finite experiments. Journal of Vision 21 (9), 2215-2215. doi: 10.1167/jov.21.9.2215.

Code repositories

  • Bayesian ideal observer framework [Code]
  • Limits of perceptual prior estimates [Code]

Extracting heading from retinal images during eye movements

Self-motion can be simulated with random dot motion that forms an optic flow pattern. When you rotate your eyes to pursue a moving object, that distorts the flow pattern (making it consistent with a curved trajectory), but the brain somehow undoes these distortions and we perceive our correct heading.

The primate visual system has evolved to be highly sensitive to optic flow cues, the complex patterns of motion that arise when an observer moves through the world, and uses them to create a vivid percept of self-motion. At the same time, we also have highly mobile eyes that constantly select new aspects of the visual scene on which to focus the high-resolution fovea. These eye movements produce additional motion on the retina that distorts the optic flow patterns arising from locomotion, potentially giving the visual system incorrect information about heading. Since we nevertheless manage to navigate the world just fine, the brain must have a way to extract heading information from these distorted images.

This extraction process could use a forward model of the expected distortions from an impending eye movement (i.e., an extraretinal mechanism), or it may simply extract the heading-related components of the moving image based on the differences between the motion patterns caused by eye movements and those caused by locomotion (i.e., a retinal mechanism). We investigated which of these two theories was better supported by recording from heading-sensitive neurons in the macaque monkey brain while the monkeys used their eyes to track targets moving in front of an optic flow stimulus.
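
For intuition about why a purely retinal mechanism is even plausible, here is a standard textbook model of instantaneous retinal flow (a sketch in the style of Longuet-Higgins and Prazdny, with sign conventions that vary across sources; it is not the stimulus code from the study): the translational, heading-related component scales with inverse depth, while the rotational component added by pursuit does not, and that structural difference is something a retinal mechanism could exploit.

```python
import numpy as np

def retinal_flow(x, y, Z, T, omega):
    """Instantaneous retinal flow for a pinhole eye with focal length 1.

    x, y  : image coordinates of each dot
    Z     : depth of each dot
    T     : observer translation (Tx, Ty, Tz)
    omega : eye rotation (wx, wy, wz), e.g. from smooth pursuit

    The translational term scales with 1/Z (depth-dependent); the rotational
    term is independent of depth, which is what distinguishes pursuit-driven
    motion from heading-related motion on the retina.
    """
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u_trans = (x * Tz - Tx) / Z
    v_trans = (y * Tz - Ty) / Z
    u_rot = wx * x * y - wy * (1 + x**2) + wz * y
    v_rot = wx * (1 + y**2) - wy * x * y - wz * x
    return u_trans + u_rot, v_trans + v_rot

# A cloud of dots at random depths, forward translation, plus a small horizontal pursuit rotation.
rng = np.random.default_rng(1)
x, y = rng.uniform(-0.5, 0.5, (2, 200))
Z = rng.uniform(1.0, 5.0, 200)
u, v = retinal_flow(x, y, Z, T=(0.0, 0.0, 1.0), omega=(0.0, 0.05, 0.0))
# The pursuit-driven (rotational) part of the flow is identical for near and far dots,
# while the heading-related (translational) part shrinks with depth.
```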

Published reports

  1. Manning TS, Britten KH. (2019) Retinal stabilization reveals limited influence of extraretinal signals on heading tuning in the medial superior temporal area. J Neurosci. 39 (41) 8064-8078. doi: 10.1523/JNEUROSCI.0388-19.2019.
  2. Manning TS. (2019) Neural Mechanisms of Smooth Pursuit Compensation in Heading Perception. University of California, Davis ProQuest Dissertations Publishing. 13882251.

Code repositories