Why Does the Cortex Reorganize After Sensory Loss?

Investigating the Teleology of Cortical Reorganization
A growing body of evidence demonstrates that the brain can reorganize dramatically following sensory loss. Although the existence of such neuroplastic crossmodal changes is not in doubt, the functional significance of these changes remains unclear. The dominant belief is that reorganization is compensatory. However, results thus far do not unequivocally indicate that sensory deprivation results in markedly enhanced abilities in other senses. Here, we consider alternative reasons besides sensory compensation that might drive the brain to reorganize after sensory loss. One such possibility is that the cortex reorganizes not to confer functional benefits, but to avoid undesirable physiological consequences of sensory deafferentation. Empirical assessment of the validity of this and other possibilities defines a rich program for future research.
Retina and LGN

A demonstration, written in Mathematica, showing how the retina and LGN interact to sample image information from the eyes.
You can play with the receptive field size to see how the whole “Hybrid Images” thing works. Click on the image to go to the demonstration.
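If you want the flavor of it without running the demonstration, here is a minimal Wolfram Language sketch of the hybrid-image idea (not the demonstration's actual code; the test-image names are just stand-ins): blur one image to keep its coarse structure, high-pass another to keep its fine detail, and add them. The blur radius plays the role of receptive field size.

```mathematica
(* Minimal hybrid-image sketch. The radius r stands in for
   receptive-field size; images are resized to match. *)
hybrid[imgA_, imgB_, r_] := Module[{a, b},
  {a, b} = ImageResize[#, {256, 256}] & /@ {imgA, imgB};
  ImageAdd[
   GaussianFilter[a, r],                     (* low-pass: coarse structure of a *)
   ImageSubtract[b, GaussianFilter[b, r]]]]  (* high-pass: fine detail of b *)

(* Slide r to watch one image give way to the other, much as viewing
   distance does with a printed hybrid image. *)
Manipulate[
 hybrid[ExampleData[{"TestImage", "House"}],
  ExampleData[{"TestImage", "Boat"}], r], {r, 1, 20}]
```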
VSS 2018 – Gravity and Ground Plane Geometry in Perspective Images

“The small things float to the top of… gravity” — Rickie Lee Jones
Renaissance artists noticed that placing objects on a visible ground plane anchors them stably, making it easy to perceive their depth. Subsequently, they developed methods for geometric calculation of perspective by drawing construction lines defining the ground plane. Thus, the artist constructs the geometry of pictorial space, based on the station point and the view direction, then places objects in it. This practice created a scale for placing and sizing objects on a minimally patterned ground plane.
VSS 2018 – Exploring the Uncanny Valley

Balls! Bouncing balls. This is science so we gotta start somewhere.
Mori’s Uncanny Valley phenomenon isn’t limited to robotics. It has been observed in many other areas, including the fine arts, especially photorealistic painting, sculpture, computer graphics, and animation. Furthermore, heuristic practices in the arts, especially those of traditional animation, have much to offer to our understanding of the appearance of phenomenological reality. One interesting example is the use of exaggeration to mitigate uncanny valley phenomena in animation. Action captured from live performance is frequently exaggerated so as to appear less uncanny.
Fechner’s Aesthetics Revisited

Isn’t it beautiful?
Gustav Fechner is widely respected as a founding father of experimental psychology and psychophysics, but few know of his interest and work in empirical aesthetics. In the late 1800s, toward the end of his career, Fechner performed experiments to empirically evaluate the beauty of rectangles, hypothesizing that the preferred shape would closely match that of the so-called ‘golden rectangle’. His findings confirmed his suspicions, but in the intervening decades significant evidence has accumulated against that result. Regardless of the outcome of this one study, Fechner ushered in the notion of using a metric to evaluate beauty in a psychophysical way. In this paper, we recreate the experiment using more naturalistic stimuli. We evaluate subjects’ preferences against models that use various types of object complexity as metrics.
Combinational Imaging: Magnetic Resonance Imaging and EEG Displayed Simultaneously

Before fMRI (Functional Magnetic Resonance Imaging) existed, I got to do this.
We were one of the first labs to do this sort of pre-‘functional’ functional imaging. Instead of volume rendering (which I would later move on to Pixar to work on with Bob Drebin, Pat Hanrahan, and Loren Carpenter), we made surfaces for everything.
Abstract: We report on a new technique to combine two technologies [magnetic resonance imaging (MRI) and topographic imaging of EEG] to produce an overlapping image of both scalp-recorded EEG and the underlying brain anatomy within a given subject. High-resolution-graphics postprocessing of these data was used to create this integrated image.
M. W. Torello, F. Phillips, W. W. Hunter Jr., and C. A. Csuri, “Combinational Imaging: Magnetic Resonance Imaging and EEG Displayed Simultaneously,” Journal of Clinical Neurophysiology, vol. 4, no. 3, pp. 274–293, Jul. 1987.
DOI: 10.1097/00004691-198707000-00007
The perception of surface orientation from multiple sources of optical information

The first piece of work I did in the “Todd Lab” at OSU.
I had just come off five years at Pixar and a year back in grad school in the Architecture and Planning department. I wrote most of the code for this, for making and displaying the objects, the interactive ‘gauge figure’, and the like. Farley and Jim came up with the distortion method (these are, notoriously, Farley’s “potatoes”, as compared to my “glavens”. Potato, potato), and Farley and I implemented it. I wrote the gauge figure stuff during a visit with Jan Koenderink, whose book Alvy Ray Smith had recommended I look at back at Pixar. Crazy.
Abstract: An orientation matching task was used to evaluate observers’ sensitivity to local surface orientation at designated probe points on randomly shaped 3-D objects that were optically defined by texture, lambertian shading, or specular highlights. These surfaces could be stationary or in motion, and they could be viewed either monocularly or stereoscopically, in all possible combinations. It was found that the deformations of shading and/or highlights (either over time or between the two eyes’ views) produced levels of performance similar to those obtained for the optical deformations of textured surfaces. These findings suggest that the human visual system utilizes a much richer array of optical information to support its perception of shape than is typically appreciated.
J. F. Norman, J. T. Todd, and F. Phillips, “The perception of surface orientation from multiple sources of optical information,” Perception & Psychophysics, vol. 57, no. 5, pp. 629–636, Jul. 1995.
Spring 2018 Vision in Animals, Humans and Machines — Final Projects

Vision in Animals, Humans and Machines is a seminar / hands-on course where we engage in a sort of comparative neuroscience with respect to how organic and inorganic systems ‘see’.
Some things are hard for animals, some things are easy. The same can be said for machines. The exhaustively deployed aphorism, “Easy things are hard for computers, hard things are easy for computers,” reminds us that the way ‘computer vision’ works probably doesn’t have all that much in common with how living organisms do.
One of the best ways to observe this is to probe situations where each type of system fails to work. In this class, we learned about biological mechanisms of vision as well as computational analogs. We tried to ‘break’ computer vision systems in systematic ways and analyzed the results.
Final Projects
This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then pitched the final work and demonstrated what they had accomplished (and failed to).
The projects had to use computational methods to implement some function or malfunction of the visual system. There was some overlap between this class and the Computational Methods class, so there was a lot of Mathematica used, along with some Lego MindStorms.
Here are this year’s projects. Please enjoy them —
Synthetic Beings, Evolutionary Vision
A genetic-algorithm driven method of generating and evolving synthetic beings with different perceptual abilities in an ever changing environment.
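To give a flavor of the approach, here is a toy genetic algorithm in Mathematica, the course’s lingua franca. The fitness function, mutation scheme, and population sizes below are placeholders of my own, not the students’ code:

```mathematica
(* Toy genetic algorithm: each "being" is a genome of perceptual
   parameters; fitness and mutation are stand-ins. *)
fitness[genome_] := -Total[(genome - target)^2];  (* toy: match a hidden environment *)
mutate[genome_] := genome + RandomVariate[NormalDistribution[0, .1], Length[genome]];
evolve[pop_] := Module[{elite = SortBy[pop, -fitness[#] &][[;; 10]]},
  Join[elite, mutate /@ RandomChoice[elite, 40]]]  (* keep the best, breed mutants *)

target = {0.2, 0.8, 0.5};      (* the "environment" beings must adapt to *)
pop = RandomReal[1, {50, 3}];  (* 50 random genomes of 3 parameters *)
First@SortBy[Nest[evolve, pop, 100], -fitness[#] &]  (* best being after 100 generations *)
```

In the project itself the environment kept changing, which is what makes the evolutionary dynamics interesting; here the target is fixed purely for brevity.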

Modeling Color Vision in Animals
A cross-species look at animals with as few as one and as many as eleven color receptors. Using multispectral images, banks of cone response functions, and illuminants, can we predict an organism’s ability to ‘see’ certain features?
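The core computation is easy to sketch. In the toy Mathematica snippet below, all the spectra are made-up placeholders (Gaussian sensitivities, a flat illuminant, a reflectance ramp); the point is just that a receptor’s ‘quantum catch’ is a discrete integral of reflectance times illuminant times sensitivity:

```mathematica
(* Receptor responses as discrete integrals over wavelength.
   All spectra here are synthetic placeholders, not measured data. *)
wavelengths = Range[400, 700, 10];
illum = ConstantArray[1., Length[wavelengths]];  (* flat illuminant *)
refl = Rescale[wavelengths, {400, 700}];         (* toy reflectance ramp *)
coneBank = Table[Exp[-((wavelengths - peak)^2/(2 30.^2))],
   {peak, {440, 540, 570}}];  (* e.g., a trichromat; a mantis shrimp would have ~11 rows *)
coneCatch[r_, e_, s_] := Total[r e s];  (* quantum catch per receptor class *)
responses = coneCatch[refl, illum, #] & /@ coneBank
(* Repeating this per pixel of a multispectral image yields an n-vector
   per location; features the organism can "see" are those whose vectors
   differ from their surroundings. *)
```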

Robotic Model of Simple Vision
Lego robotics, Euglena.

Modeling Prosopagnosia
Can we make a machine learning based face recognizer ‘face-blind’?
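One way to pose the question computationally, sketched here with synthetic stand-ins rather than an actual face network: train a classifier on toy ‘embedding’ vectors, then ‘lesion’ feature dimensions and watch identification degrade.

```mathematica
(* Synthetic stand-in for face-space embeddings; each identity is a
   cluster of 8-dimensional feature vectors. *)
embedding[id_] := RandomVariate[NormalDistribution[id, .3], 8];
train = Flatten@Table[embedding[id] -> id, {id, 5}, {20}];
recognizer = Classify[train];

lesion[v_, dims_] := ReplacePart[v, Thread[dims -> 0.]];  (* zero selected "units" *)

probe = embedding[3];
{recognizer[probe], recognizer[lesion[probe, {1, 2, 3, 4}]]}
(* Intact vs. lesioned: as more dimensions are zeroed, identification
   should fall toward chance, a crude analog of acquired prosopagnosia. *)
```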

Tracking Rats
Can we make a machine-vision system that can track a rat in a socialization apparatus and use machine learning to identify its behavior? (In cooperation with the Computational Methods class.)
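A minimal sketch of the vision half in Mathematica, assuming a fixed camera and a reference background frame (`frames` and `background` below are placeholders, not the project’s data):

```mathematica
(* Background subtraction + largest-blob centroid, per frame. *)
trackRat[frame_, background_] := Module[{diff, mask},
  diff = ImageDifference[ColorConvert[frame, "Grayscale"],
    ColorConvert[background, "Grayscale"]];
  mask = SelectComponents[Binarize[diff, .15], "Count", 1];  (* keep the biggest blob *)
  First@Values@ComponentMeasurements[mask, "Centroid"]]      (* {x, y} in image coordinates *)

positions = trackRat[#, background] & /@ frames;
(* Derived features such as speed, turning, and dwell time can then
   feed a behavior classifier, e.g., Classify. *)
```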

Cast of Characters
Zachariah Arnold, Iman Bays, Sierra Carlen, George Chakalos, Jessica Cheng, Daniela Cossio, Allison Dalton, Seeley Fancher, Sara Fontana, Rachel Greene, Julia Howe, Donna Nguyen, Jeffrey Okoro, Reece Robinson, Anthony Song, Henry Stadler, Megan Volkert, Xueying Wu.