VSS 2018 – Shape scission: causal segmentation of shape

Warts, cracks, causal shape segmentation.

Research on shape perception usually focuses on the estimation of local surface geometry through cues like stereopsis, shading, or texture. Here, we argue that observers use these shape estimates to infer other object properties such as material composition and the transformation processes that generated the observed shape from this matter. We call this separation of object shape into intrinsic and extrinsic object properties shape scission. We investigated shape scission in a series of experiments with different groups of participants responding to a set of 8 unfamiliar rendered objects, each transformed by 8 transformations (e.g., “melted”, “cut”, or “inflated”). Importantly, participants could never directly compare the transformed and untransformed versions of objects.


VSS 2018 – Gravity and Ground Plane Geometry in Perspective Images

“The small things float to the top of… gravity” — Rickie Lee Jones

Renaissance artists noticed that placing objects on a visible ground plane anchors them stably, making it easy to perceive their depth. Subsequently, they developed methods for geometric calculation of perspective by drawing construction lines defining the ground plane. Thus, the artist constructs the geometry of pictorial space, based on the station point and the view direction, then places objects in it. This practice created a scale for placing and sizing objects on a minimally patterned ground plane.


VSS 2018 – Exploring the Uncanny Valley

Balls! Bouncing balls. This is science, so we gotta start somewhere.

Mori’s Uncanny Valley phenomenon isn’t limited to robotics. It has been observed in many other areas, including the fine arts, especially photorealistic painting, sculpture, computer graphics, and animation. Furthermore, heuristic practices in the arts, especially those of traditional animation, have much to offer to our understanding of the appearance of phenomenological reality. One interesting example is the use of exaggeration to mitigate uncanny valley phenomena in animation. Action captured from live performance is frequently exaggerated so as to appear less uncanny.


VSS…

I’ve got strong last-minute game.

Editing my VSS stuff on the Uncanny Valley. I’ll post something here when I get back.


Fechner’s Aesthetics Revisited

Isn’t it beautiful?

Gustav Fechner is widely respected as a founding father of experimental psychology and psychophysics, but far fewer know of his interests and work in empirical aesthetics. In the late 1800s, toward the end of his career, Fechner performed experiments to empirically evaluate the beauty of rectangles, hypothesizing that the preferred shape would closely match that of the so-called ‘golden rectangle’. His findings confirmed his suspicions, but in the decades since, significant evidence has pointed away from that finding. Regardless of the results of this one study, Fechner ushered in the notion of using a metric to evaluate beauty in a psychophysical way. In this paper, we recreate the experiment using more naturalistic stimuli. We evaluate subjects’ preferences against models that use various types of object complexity as metrics.


Combinational Imaging: Magnetic Resonance Imaging and EEG Displayed Simultaneously

Before fMRI (Functional Magnetic Resonance Imaging) existed, I got to do this.

We were one of the first labs to do this sort of pre-‘functional’ functional imaging. Instead of volume rendering (which I would go on to work on at Pixar with Bob Drebin, Pat Hanrahan, and Loren Carpenter), we made surfaces for everything.

Abstract: We report on a new technique to combine two technologies [magnetic resonance imaging (MRI) and topographic imaging of EEG] to produce an overlapping image of both scalp-recorded EEG and the underlying brain anatomy within a given subject. High-resolution-graphics postprocessing of these data was used to create this integrated image.

M. W. Torello, F. Phillips, W. W. Hunter Jr., and C. A. Csuri, “Combinational Imaging: Magnetic Resonance Imaging and EEG Displayed Simultaneously,” Journal of Clinical Neurophysiology, vol. 4, no. 3, pp. 274–293, Jul. 1987.

DOI: 10.1097/00004691-198707000-00007

 Torello et al., 1987

The perception of surface orientation from multiple sources of optical information

The first piece of work I did in the “Todd Lab” at OSU.

I had just come off five years at Pixar and a year back in grad school in the Architecture and Planning department. I wrote most of the code for this: making and displaying the objects, the interactive ‘gauge figure’, and the like. Farley and Jim came up with the distortion method (the resulting objects are notoriously Farley’s “potatoes”, as compared to my “glavens”. Potato, potato), and Farley and I implemented it. I wrote the gauge figure stuff during a visit with Jan Koenderink, whose book Alvy Ray Smith had recommended I look at back at Pixar. Crazy.

Abstract: An orientation matching task was used to evaluate observers’ sensitivity to local surface orientation at designated probe points on randomly shaped 3-D objects that were optically defined by texture, Lambertian shading, or specular highlights. These surfaces could be stationary or in motion, and they could be viewed either monocularly or stereoscopically, in all possible combinations. It was found that the deformations of shading and/or highlights (either over time or between the two eyes’ views) produced levels of performance similar to those obtained for the optical deformations of textured surfaces. These findings suggest that the human visual system utilizes a much richer array of optical information to support its perception of shape than is typically appreciated.

J. F. Norman, J. T. Todd, and F. Phillips, “The perception of surface orientation from multiple sources of optical information,” Perception & Psychophysics, vol. 57, no. 5, pp. 629–636, Jul. 1995.

Norman, Todd & Phillips, 1995.

Spring 2018 Vision in Animals, Humans and Machines — Final Projects

Vision in Animals, Humans and Machines is a seminar / hands-on course where we engage in a sort of comparative neuroscience with respect to how organic and inorganic systems ‘see’.

Some things are hard for animals, some things are easy. The same can be said for machines. The exhaustively deployed aphorism “Easy things are hard for computers, hard things are easy for computers” reminds us that the way ‘computer vision’ works probably doesn’t have all that much in common with how living organisms do1.

One of the best ways to observe this is to probe situations where each type of system fails to work. In this class, we learned about biological mechanisms of vision as well as computational analogs. We tried to ‘break’ computer vision systems in systematic ways and analyzed the results.


Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then pitched the final work and demonstrated what they had accomplished (and failed to).

The projects had to use computational methods to implement some function or malfunction of the visual system. There was some overlap between this class and the Computational Methods class, so there was a lot of Mathematica used, along with some Lego MindStorms2.

Here are this year’s projects. Please enjoy them —


Synthetic Beings, Evolutionary Vision

A genetic-algorithm-driven method of generating and evolving synthetic beings with different perceptual abilities in an ever-changing environment.
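Here’s a minimal sketch of the core loop in Mathematica (mine, not the students’ code): one generation scores the genomes, keeps the fittest half, and refills the population with mutated copies. The genome encoding and the fitness function are invented placeholders.

(* one generation of a toy genetic algorithm *)
mutate[g_] := g + RandomReal[{-0.05, 0.05}, Length[g]];
step[pop_, fitness_] := Module[{best},
  best = SortBy[pop, -fitness[#] &][[;; Quotient[Length[pop], 2]]];
  Join[best, mutate /@ best]];

pop = RandomReal[1, {20, 5}];  (* 20 beings, 5 'perceptual' genes each *)
Nest[step[#, Function[g, -Total[(g - 0.5)^2]]] &, pop, 100]  (* genomes drift toward the placeholder optimum at 0.5 *)

An ever-changing environment just means the fitness function itself changes between generations.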


Modeling Color Vision in Animals

A cross-species look at animals with as few as 1 and as many as 11 color receptors. Using multispectral images and banks of cone response functions and illuminants, can we predict an organism’s ability to ‘see’ certain features?
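The core computation is compact enough to sketch in Mathematica. This is a toy version, with a flat illuminant and a Gaussian stand-in for a receptor sensitivity, not the course’s actual spectra:

(* receptor catch = illuminant x reflectance x sensitivity, summed over wavelength *)
wavelengths = Range[400, 700, 10];                      (* nm *)
illuminant = ConstantArray[1., Length[wavelengths]];    (* flat placeholder illuminant *)
reflectance = RandomReal[1, Length[wavelengths]];       (* one pixel of a multispectral image *)
sensitivity = Exp[-((wavelengths - 560)^2)/(2 50.^2)];  (* Gaussian stand-in for an L cone *)
response = Total[illuminant reflectance sensitivity]    (* quantal catch, up to scale *)

Repeat over each pixel and each of the 1–11 receptor classes and you have the organism’s-eye view of the image.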


Robotic Model of Simple Vision

Lego robotics, Euglena.


Modeling Prosopagnosia3

Can we make a machine-learning-based face recognizer ‘face-blind’?
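One hedged way to pose the question (my sketch, not necessarily the project’s method): store known faces as feature vectors, identify by nearest neighbor, then ‘lesion’ the representation and watch identification fail.

(* random 128-d stand-ins for face embeddings; a real recognizer would use a network's features *)
faces = RandomReal[1, {10, 128}];                     (* ten known identities *)
probe = faces[[3]] + RandomReal[{-0.02, 0.02}, 128];  (* a new view of identity 3 *)
lesion[v_, p_] := v RandomChoice[{p, 1 - p} -> {0, 1}, Length[v]];  (* zero ~p of the features *)
First@Nearest[faces -> "Index", probe]                (* intact: recovers identity 3 *)
First@Nearest[faces -> "Index", lesion[probe, 0.95]]  (* heavily lesioned: often misidentifies *)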


Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and use machine learning to identify its behavior? (In cooperation with the Computational Methods class.)
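The tracking half reduces to background subtraction; a minimal sketch, assuming a fixed camera and an empty-apparatus background frame (frames and background below are assumed video data, not part of the project’s code):

(* subtract the background, binarize, and return the blob centroids for one frame *)
track[frame_Image, background_Image] := ComponentMeasurements[
   MorphologicalComponents[Binarize[ImageDifference[frame, background], 0.1]],
   "Centroid"];
trajectory = track[#, background] & /@ frames;

The machine-learning part then classifies behavior from trajectories like these.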


Cast of Characters

Zachariah Arnold, Iman Bays, Sierra Carlen, George Chakalos, Jessica Cheng, Daniela Cossio, Allison Dalton, Seeley Fancher, Sara Fontana, Rachel Greene, Julia Howe, Donna Nguyen, Jeffrey Okoro, Reece Robinson, Anthony Song, Henry Stadler, Megan Volkert, Xueying Wu.


  1. This is, of course, fine. ↩︎
  2. Remind me to tell you the story of visiting Mitch Resnick’s lab at MIT, back while I was working at Pixar, and playing with the OG LegoLogo blocks and wires and things. ↩︎
  3. People’s choice award winner. ↩︎

Spring 2018 Computational Methods — Final Projects

The goal of Computational Methods in Psychology and Neuroscience is to acquaint students with scientific computing, broadly speaking, but especially as it applies to psychology and neuroscience.

Even so, it attracts students from a pretty wide swath of majors. This year, in addition to psychology and neuroscience, we had majors from business and biology, as well as the political and computer sciences.

Over the years we have used a variety of software in the course, including Python, MATLAB, and Mathematica, as well as purpose-built environments like PsychoPy, FreeSurfer, ImageJ, and others.

This year, we focused on Mathematica as it provides a rich set of tools and access to data and datasets without the sometimes painful management of packages and such1.


Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then pitched the final work and demonstrated what they had accomplished (and failed to).

Some of these projects are super ambitious for an introductory class, but the goal was learning and understanding the problem solving needed, not so much the minute implementation and theoretical details. Even if the problem wasn’t ‘solved’ in every case, I feel like each individual / group now has a much better sense of what is possible and what is difficult2. In some cases, I implemented ‘helper’ code that is now part of the FPTools repository, but the ideas and final implementations are their own.

Here are this year’s projects. Please enjoy them —


Giant Asteroids Might Destroy Earth3

A computational simulation of an asteroid impact with the planet Earth, featuring animations and mortality rates.
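The physics that anchors a simulation like this is just kinetic energy. A back-of-the-envelope version in Mathematica, assuming a spherical stony asteroid of density 3000 kg/m³ (my numbers, not the project’s):

(* KE = 1/2 m v^2, with the mass of a stony sphere; SI units *)
impactEnergy[radius_, speed_] := 0.5 (4/3 Pi radius^3 3000.) speed^2;
impactEnergy[500., 20000.]/(4.184*10^15)  (* a 1-km rock at 20 km/s, in megatons of TNT: ~75,000 *)

The animations and the mortality estimates build on top of numbers like this.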


Kids and Words

Linguistic analysis of conversations between kids and their parents.


Morality in Political Candidates

In the wake of the Facebook/Cambridge Analytica fiesta, a look at some crowdsourced (MTurk) questionnaire data about the personalities of political candidates. A machine-learning model of candidate preferences, driven by interactive input.


Cartoon Face Recognition

The predominant implementations of ‘face finding’ algorithms don’t do a very good job with cartoon faces. This machine-learning project sets out to rectify that oversight.
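The failure is easy to demonstrate in Mathematica; a crude drawn face typically returns no detections from the built-in detector (hedged: results vary by version and drawing):

(* FindFaces is tuned to photographs of human faces *)
cartoon = Rasterize[Graphics[{LightYellow, Disk[], Black,
    Disk[{-0.3, 0.2}, 0.08], Disk[{0.3, 0.2}, 0.08],
    Thick, Circle[{0, -0.2}, 0.3, {Pi, 2 Pi}]}], "Image"];
FindFaces[cartoon]  (* typically {}: the detector misses drawn faces *)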


Name That Tune

Linguistic analysis from audio clips of songs? A huge project. Phonemes and classifiers and lyrics, oh my!


Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and identify its behavior? (In cooperation with the Vision in Animals, Humans and Machines class.)


Primordial Soup

Delicious! Can we simulate the conditions of the creation of life’s building blocks (amino acids) à la the Miller-Urey experiment?


Get Your Axon

Can we teach a classifier to tell the difference between normal and malformed axons?
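The classifier side is a few lines in Mathematica; here with invented two-feature stand-ins (say, caliber and tortuosity) in place of real traced-axon measurements:

(* train on labeled feature vectors, then query *)
normal    = Table[{RandomReal[{0.9, 1.1}], RandomReal[{0.0, 0.1}]} -> "normal", 30];
malformed = Table[{RandomReal[{0.4, 0.8}], RandomReal[{0.2, 0.6}]} -> "malformed", 30];
c = Classify[Join[normal, malformed]];
c[{1.0, 0.05}]  (* -> "normal"; c[{...}, "Probabilities"] gives confidences *)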


Cast of Characters

Andres Beltre, George Chakalos, Jacob Chen, Jessica Cheng, Daniela Cossio, Allie Dinaburg, Izzy Fischer, Emil Ghitman Gilkes, Helen Gray-Bauer, Aimee Hall, Ryan Hill, Natasha Martinez, Zoe Michas, Annika Morrell, Laura Noejovich, Sarah Wilensky, Ray Yampolsky


  1. This is especially true for young scientists just dipping their toes into scientific computing. Even with some of the great package and environment management software out there, some scientific computing environments can be too much. ↩︎
  2. On the first day of class, I type Sphere[]//Graphics3D into Mathematica and explain that, in 1983, I took 3 G/UG courses at OSU (CS 781–3) to get that to happen on a 320×240 pixel screen in roughly geological time. Then I shake my cane at them and tell them to get off my lawn. ↩︎
  3. People’s choice award ↩︎

Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation

Haptic and visual ‘contours’

It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture.

In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions; e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitates 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

J. F. Norman, F. Phillips, J. R. Cheeseman, K. E. Thomason, C. Ronning, K. Behari, K. Kleinman, A. B. Calloway, and D. Lamirande, “Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation,” PLoS ONE, vol. 11, no. 2, p. e0149058, Feb. 2016.

DOI: 10.1371/journal.pone.0149058

Norman, Phillips et al. 2016