Spring 2018 Computational Methods — Final Projects

The goal of Computational Methods in Psychology and Neuroscience is to acquaint students with scientific computing, broadly speaking, but especially as it applies to psychology and neuroscience.

Even so, it attracts students from a pretty wide swath of majors. This year, in addition to psychology and neuroscience, we had majors from business and biology, as well as political science and computer science.

Over the years we have used a variety of software in the course, including Python, MATLAB, and Mathematica, as well as purpose-built environments like PsychoPy, FreeSurfer, ImageJ, and others.

This year, we focused on Mathematica as it provides a rich set of tools and access to data and datasets without the sometimes painful management of packages and such.¹
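
To give a sense of what that buys you, here is the flavor of one-liner we lean on from day one. (The text here is just an example I like, not a course assignment.)

    (* curated example data plus built-in text tools, straight out of the box, no installs *)
    WordCloud[DeleteStopwords[ExampleData[{"Text", "AliceInWonderland"}]]]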


Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then presented the final work and demonstrated what they had accomplished (and failed to).

Some of these projects are super ambitious for an introductory class, but the goal was learning and understanding the problem solving needed, not so much minute implementation and theoretical details. Even if the problem wasn’t ‘solved’ in every case, I feel like each individual / group now has a much better sense of what is possible and what is difficult.² In some cases, I implemented ‘helper’ code that is now part of the FPTools repository, but the ideas and final implementations are their own.

Here are this year’s projects. Please enjoy them —


Giant Asteroids Might Destroy Earth³

A computational simulation of asteroid impact with the planet Earth, featuring animations and mortality rates.


Kids and Words

Linguistic analysis of conversations between kids and their parents.


Morality in Political Candidates

In the wake of the Facebook/Cambridge Analytica fiesta, a look at some crowdsourced (MTurk) questionnaire data about the personalities of political candidates. Candidate preferences were then modeled with machine learning based on interactive input.


Cartoon Face Recognition

The predominant implementation of ‘face finding’ algorithms doesn’t do a very good job with cartoon faces. This machine learning project sets out to rectify this oversight.


Name That Tune

Linguistic analysis from audio clips of songs? A huge project. Phonemes and classifiers and lyrics oh my!


Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and identify its behavior? (In cooperation with the Vision in Animals, Humans and Machines class.)


Primordial Soup

Delicious! Can we simulate the conditions of the creation of life’s building blocks (amino acids) à la the Miller-Urey experiment?


Get Your Axon

Can we teach a classifier to tell the difference between normal and malformed axons?
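
None of the students’ code is reproduced here, but to give a sense of how these pitches map onto the course’s tools: a question like the axon one above boils down, in Mathematica, to a sketch roughly like this. The folder and file names are hypothetical, and this is not the team’s actual implementation.

    (* hypothetical: "normal" and "malformed" are folders of labeled example images *)
    training = Flatten@Table[Import[f] -> label,
        {label, {"normal", "malformed"}}, {f, FileNames["*.png", label]}];
    axonClassifier = Classify[training];         (* Classify picks a method automatically *)
    axonClassifier[Import["mystery-axon.png"]]   (* hypothetical held-out image *)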


Cast of characters

Andres Beltre, George Chakalos, Jacob Chen, Jessica Cheng, Daniela Cossio, Allie Dinaburg, Izzy Fischer, Emil Ghitman Gilkes, Helen Gray-Bauer, Aimee Hall, Ryan Hill, Natasha Martinez, Zoe Michas, Annika Morrell, Laura Noejovich, Sarah Wilensky, Ray Yampolsky


  1. This is especially true with young scientists just dipping their toes into scientific computing. Even with some of the great package and environment management software out there, some scientific computing environments can be too much. ↩︎
  2. On the first day of class, I type Sphere[]//Graphics3D into Mathematica and explain that, in 1983, I took 3 G/UG courses at OSU (CS 781–3) to get that to happen on a 320×240 pixel screen in roughly geological time. Then I shake my cane at them and tell them to get off my lawn. ↩︎
  3. People’s choice award ↩︎

Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation

Haptic and visual ‘contours’

It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture.

In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions; e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

J. F. Norman, F. Phillips, J. R. Cheeseman, K. E. Thomason, C. Ronning, K. Behari, K. Kleinman, A. B. Calloway, and D. Lamirande, “Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation,” PLoS ONE, vol. 11, no. 2, p. e0149058, Feb. 2016.

DOI:10.1371/journal.pone.0149058

Norman, Phillips et al. 2016

Haptic shape discrimination and interhemispheric communication

Touching.

In three experiments participants haptically discriminated object shape using unimanual (single hand explored two objects) and bimanual exploration (both hands were used, but each hand, left or right, explored a separate object). Such haptic exploration (one versus two hands) requires somatosensory processing in either only one or both cerebral hemispheres; previous studies related to the perception of shape/curvature found superior performance for unimanual exploration, indicating that shape comparison is more effective when only one hemisphere is utilized. The current results, obtained for naturally shaped solid objects (bell peppers, Capsicum annuum) and simple cylindrical surfaces, demonstrate otherwise: bimanual haptic exploration can be as effective as unimanual exploration, showing that there is no necessary reduction in ability when haptic shape comparison requires interhemispheric communication.

We found that while successive bimanual exploration produced high shape discriminability, the participants’ bimanual performance deteriorated for simultaneous shape comparisons. This outcome suggests that either interhemispheric interference or the need to attend to multiple objects simultaneously reduces shape discrimination ability. The current results also reveal a significant effect of age: older adults’ shape discrimination abilities are moderately reduced relative to younger adults, regardless of how objects are manipulated (left hand only, right hand only, or bimanual exploration).

C. J. Dowell, J. F. Norman, J. R. Moment, L. M. Shain, H. F. Norman, F. Phillips, and A. M. L. Kappers, “Haptic shape discrimination and interhemispheric communication,” Sci. Rep., vol. 8, no. 1, pp. 1–10, Dec. 2017.

DOI:10.1038/s41598-017-18691-2

Norman et al. 2017

Creating noisy stimuli

So much noise.

A method for creating a variety of pseudo-random ‘noisy’ stimuli that possess several useful statistical and phenomenal features for psychophysical experimentation is outlined. These stimuli are derived from a pseudo-periodic function known as multidimensional noise. This class of function has the desirable properties of being periodic, defined on a fixed domain, roughly symmetric, and stochastic, yet consistent and repeatable. The stimuli that can be created from these functions have a controllable amount of complexity and self-similarity properties that are further useful when generating naturalistic-looking objects and surfaces for investigation. The paper addresses the creation and manipulation of stimuli with the use of noise, including an overview of this particular implementation. Stimuli derived from these procedures have been used successfully in several shape and surface perception experiments and are presented here for use by others and further discussion as to their utility.
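
For anyone who wants to play with the general idea, here is a minimal summed-octave value-noise sketch in Mathematica. To be clear, this is not the paper’s implementation (and it drops the periodicity); it just captures the stochastic-yet-repeatable flavor. The seed, octave frequencies, and plot settings are arbitrary.

    (* repeatable pseudo-random noise: seed once, then sum smooth random grids at several scales *)
    SeedRandom[1234];
    valueNoise[freq_] := ListInterpolation[RandomReal[{-1, 1}, {freq + 1, freq + 1}]];
    freqs = {4, 8, 16, 32};                       (* octave frequencies; amplitude falls off as 1/freq *)
    fs = valueNoise /@ freqs;
    noise[x_, y_] := Sum[fs[[i]][1 + freqs[[i]] x, 1 + freqs[[i]] y]/freqs[[i]], {i, Length[freqs]}];
    DensityPlot[noise[x, y], {x, 0, 1}, {y, 0, 1}, PlotPoints -> 60]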

F. Phillips, “Creating noisy stimuli,” Perception, vol. 33, no. 7, pp. 837–854, 2004.

DOI:10.1068/p5141

Phillips 2004.

Perceptual representation of visible surfaces

What is a surface, anyway?

Two experiments are reported in which we examined the ability of observers to identify landmarks on surfaces from different vantage points. In Experiment 1, observers were asked to mark the local maxima and minima of surface depth, whereas in Experiment 2, they were asked to mark the ridges and valleys on a surface. In both experiments, the marked locations were consistent across different observers and remained reliably stable over different viewing directions. These findings indicate that randomly generated smooth surface patches contain perceptually salient landmarks that have a high degree of viewpoint invariance. Implications of these findings are considered for the recognition of smooth surface patches and for the depiction of such surfaces in line drawings.

Includes a handy differential geometry tutorial appendix.

F. Phillips, J. T. Todd, J. J. Koenderink, and A. M. L. Kappers, “Perceptual representation of visible surfaces,” Percept Psychophys, vol. 65, no. 5, pp. 747–762, Jul. 2003.

DOI:10.3758/BF03194811

Phillips et al. 2003

Complicated sports and movie watching

While I was on sabbatical in Gießen, I was thrilled to have super (über?) fast internet and a nice television.

Unfortunately for me, the good folks at Apple / Netflix / HBO / etc. don’t want people in other countries to be able to easily access American™ feeds (and vice versa, of course; I dig the whole ‘licensing’ thing). Still, I paid for it, and it would be nice to be able to access the things I pay for when I am in places other than my usual places.

So, what to do? Basically, set up a server back in America™, VPN into that thing, then try my best to convince the AppleTV in my apartment to access that feed. Turns out, this was more complicated than you’d think, since the AppleTV does bits of voodoo so that, even though it was connected to a VPN back in Saratoga, it still ‘knew’ it was in Germany. So basically my approaches involved various iOS devices, screen sharing, and voodoo strategies.

My first iteration looked like this:

(Note that I tried to draw the flag of Germany but unfortunately ended up with the flag of Belgium. I’m pretty sure they get that all the time… I erased it here.)

I wanted to watch Arsenal and, sure enough, whatever sports network my apartment building had was not really that ‘diverse’ in the sporting sense, so that’s how I did it. It was ridiculous.

So, to make it even more ridiculous, I went with version 2 here:

This required internet sharing on the laptop, hard-wiring the AppleTV, and using screen sharing only.

I’m sure neither of these works anymore, but I found these Paper drawings I made to remind myself and thought I should revisit my insanity.

Gromitcam – in stereo

Back in 1997 I made a stereo webcam for keeping track of my dog, Gromit.

I was rolling through some archives and found an image from January 1998. It used two Logitech golf-ball cams and a bunch of ad hoc software that pushed the frames to my machine at the OSU Vision Laboratory. I’ll see if I can find some photos of the rig, but I dare say it was one of the first stereo webcams ever. (Update: found some.)

You can take the boy out of architecture school

rat palace

This is a rodent enclosure for behavioral experiments.

We printed the main cage, post retainers, and lid in ABS on our uPrint, and the ‘stay off the roof’ roof on our Formlabs in a nice, slick, durable resin. There are 4 of these babies in Hassan’s lab, in our awesome 80/20 observation cages. All parametrically designed with OpenSCAD.

Testre le embed das iframe

This is a test of an embedded iframe of fun and joy.

Please excuse anything annoying.

[advanced_iframe securitykey="c8f174978f95886638babe6f19ad6e691566779b" src="https://www.wolframcloud.com/objects/daf49e5e-2e03-4c1e-b1c4-b5750bbd5fdc?_view=frameless" width="100%" height="600"]

Most sincerely.

Bouncing Balls

A phase-space plot of height v. exaggeration for two different types of balls with two different coefficients of restitution.

Nothing important here, just a fun-looking plot.
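
The original plot isn’t reproduced here, but for the curious, the underlying simulation is only a few lines in Mathematica. The restitution values below are made up for illustration, and I’ve plotted plain height against velocity rather than the ‘exaggeration’ axis mentioned above.

    (* a ball under gravity that loses speed at each bounce, via NDSolve and a WhenEvent rule *)
    bounce[e_] := NDSolveValue[
       {y''[t] == -9.81, y[0] == 1, y'[0] == 0,
        WhenEvent[y[t] == 0, y'[t] -> -e y'[t]]}, y, {t, 0, 3}];
    sols = bounce /@ {0.9, 0.8};                  (* two made-up coefficients of restitution *)
    ParametricPlot[Evaluate[{#'[t], #[t]} & /@ sols], {t, 0, 3},
      AxesLabel -> {"velocity", "height"}]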