Dangerous awards

award title slide

A visit to Champaign, a return flight with a ‘possible weapon’.

I went to the 2019 Wolfram Technology Conference last week in snowy Champaign, Illinois. I gave a talk on some stuff we were doing in our classes that used the Wolfram Language, some Arduino hardware, and some big data from fruits. It was fun – though I had the dreaded last talk on the last day, so a big thanks to the folks who stuck around.

It was during happy hour, so I thought I should maybe have bought everyone drinks?

I got to co-MC the Live Coding Championship —

which, if you’re clever and bored enough, you can find on Twitch somewhere. I made a ‘waning gibbous’ joke that I probably should take back. Gerli dominated #girlswhocode; it was really great fun.

me wearing the wrestling belt

I also, a little unexpectedly, was the recipient of a Wolfram Innovation Award —

which was super nice and a great honor. I now have one ‘moment of prestige’ in common with Nassim Nicholas Taleb. That and, on my deathbed, I will receive total consciousness.

So I got that goin’ for me, which is nice.

On the way through TSA at the Champaign-Urbana airport (Sign: “TSA is out, but will be back at 9:00AM”), my bags were searched because of ‘something dangerous’ — AKA the award. The agent congratulated me though, making the dirty-underwear and t-shirt dishevelment of my bags a little more bearable.

On that flight, I sat next to wheelchair racer Jenna Fesemyer, a fellow Ohioan, who was on her way to the NYC Marathon (where she placed 4th/7th in US/Internationally!). I bothered her with endless conversation about bike technology. She humored me, which was also nice.

All in all, and despite all the ‘stuff’ I had to do, it was really a lot of fun, and I came back not so much tired (well, I did have to chainsaw up a big ol’ tree that fell down in the wind, so that was more tiring; but I’m a recreational lumberjack now) as rejuvenated and inspired.

Apples and Oranges

colorimeter device

A talk I gave at the Wolfram Technology Conference, 2019.

In our “Computational Methods for Psychology and Neuroscience” course, we teach undergraduate students the fundamentals of computational thinking (as opposed to traditional “programming”) using a project-based approach. Over the years, project topics have ranged from linguistics and video image analysis to Dynamic[]-driven data collection, analysis, and presentation, machine learning, and beyond. Most recently, we chose colorimetry and psychophysics as our project theme. Using the Connected Devices framework and an Arduino for data collection, we built a machine learning model from publicly available hyperspectral data that could reliably discriminate fruit types from simple, low-dimensional spectral scans.

The resulting project was well received by students and covered a broad range of topics that are useful in neuroscience, including: procedural programming of the Arduino, basic electronics, sensor-based data acquisition, functional programming in the Wolfram Language, instrument calibration, analysis, visualization, and machine learning. Here we discuss the various challenges and successes in this 15-week class.
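The fruit-classification pipeline described above can be sketched in a few lines of Wolfram Language. This is a minimal illustration, not the actual class code; the data file, the band indices, and the test scan are all hypothetical stand-ins.

```wolfram
(* Hypothetical data: each row is a reflectance spectrum plus a fruit label,
   e.g. {r400, r410, ..., r700, "Apple"} *)
spectra = Import["hyperspectral-fruit.csv"];

(* Reduce each full spectrum to a few low-dimensional 'sensor' bands,
   mimicking what a simple photodiode-and-filter rig can measure.
   The band indices here are made up for illustration. *)
bands = {5, 15, 25};
examples = (#[[bands]] -> Last[#]) & /@ spectra;

(* Train a classifier mapping low-dimensional scans to fruit type *)
fruitModel = Classify[examples];

(* Later, classify a live three-band scan from the Arduino rig *)
fruitModel[{0.42, 0.31, 0.18}]
```

The point of the exercise is that Classify does the modeling heavy lifting, so class time can go to the electronics, calibration, and data-wrangling around it.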

Here’s a link to the presentation.

And here’s a version of it rendered from the Wolfram Cloud –

Two recent submissions

So many things on the burner.

First – The Veiled Virgin paper has been submitted. We didn’t put it on a preprint server yet; we’re trying to figure out what we want to do there. If you’d like a copy of the preprint, email me.

Second – The material perception paper has been put up on arXiv. Go check it out at https://arxiv.org/abs/1908.00902 – if you dare! Let me know what you think.

So, about RIT

Me in front of the RIT Magic Center

Time for a change of venue.

A long time ago I left Pixar and went back to graduate school because I was interested in understanding more about the creative stuff we did — How the tools facilitated, constrained, and otherwise ‘stereotyped’ what was created.

I’m still interested in that stuff.

I got in at the two places I applied — OSU, back in the Architecture and City Planning department of my undergraduate misadventures, and MIT, in the Media Lab. After a whole bunch of soul searching, I ended up back at Ohio State (the fact that the grad student stipend was the same in Cambridge, Mass. and Columbus, Ohio had a little bit to do with it, but it was a more complicated decision than that). While I was there, I met a guy in an early-morning vision science class (7:30 AM; breakfast with Lester Krueger), Vic Perotti (who, by amazing coincidence, is presently on the faculty at RIT), who was doing vision research — using computer graphics! He introduced me to his advisor, Jim Todd, and I abandoned architecture school a second time for the siren call of the pixel.

I’ve spent the past 21 years in Saratoga Springs — just a beautiful place — studying and doing a lot of teaching about human perception, mostly vision. Skidmore has been a great place to be faculty. My niggles are trivial and my complaints few. I have great colleagues, made a few friends, and taught some great students. My family loves it here too.

After an extended courtship, RIT won me over with the opportunity to research and teach the shiny things that distracted me away from Pixar in the first place. I will be a ‘Professor of Motion Picture Science’ (now that is a $5 title! I’ll take it) at the newly established MAGIC center.

Thirty years ago, the opportunity to do this sort of stuff in an academic setting was pretty limited. In grad school I did a little work at ACCAD, which was what the Computer Graphics Research Group (where I spent the last 2+ years of my undergraduate days) had transmogrified into as Chuck Csuri retired. This was a time of great growth and hiring in the graphics and entertainment universe. Almost every graduate program on earth had to bend itself to the pragmatics of training folks for that scene. I’m OK with training, but I prefer teaching. I like learning but prefer researching. That’s why my time at Skidmore doing basic vision research was so rewarding.

Now I get to teach and research what I really wanted to do a few decades back. I get to do it in an exciting new center focused on real interdisciplinarity, alongside a whole armada of similarly-disposed colleagues and students — Old friends from grad school, new friends from my sabbatical a few years back, a Skidmore parent who helped convince me to come visit RIT in the first place, and a student from my lab who ended up at RIT and is now professing himself.

I can’t wait.

Temporal consequences of spatial acuity reduction

Space-time visual insanity.

Some work I collaborated on concerning spatiotemporal vision. We have some very interesting findings hinted at in the ‘Puzzles’ section that I look forward to us getting out there.

Temporal consequences of spatial acuity reduction

Pawan Sinha, Sid Diamond, Frank Thorn, Jie Ye, Flip Phillips, Sharon Gilad-Gutnick, Shlomit Ben-Ami and Sruti Raja – MIT Brain and Cognitive Sciences, Wenzhou Medical College, Skidmore College Psychology and Neuroscience.

Various eye conditions, such as cataracts and refractive errors, induce spatial blur in the retinal image. This, by definition, reduces high spatial frequency content. How, if at all, does this impact the temporal structure of the visual input? What are the implications of any such spatio-temporal linkage?

Effects of the Spatial Spectrum of Illumination on Material Perception

A little VSS 2019 fun with some old friends. Old as in length of time we’ve all known each other, not in geologic age.

Like all good science, things changed a little between abstract submission and the actual stuff we’ll present. This is mainly about the re-modeling of material properties by manipulating the global illumination. We have other experiments that cover the stuff in the abstract, but it just seemed to me like too much stuff to put in one poster. You’d be there for an hour while we walked through it. Also, as a direct challenge from a colleague who noticed that I used the most-words-ever on last year’s poster, I went totally minimal here.

Effects of the Spatial Spectrum on the Perception of Reflective and Refractive Materials

Flip Phillips, J Farley Norman, and James Todd – Skidmore, WKU, OSU

Highly reflective and refractive materials such as gemstones, polished metals, shimmering water, glazed ceramics and the like, act as touchstones of visual wonder for humans. While this might simply be indicative of a “sparkly good!” mechanism of prehistoric origin, the question remains how the human visual system uses this information to identify materials. Since the 15th century, painters (e.g., van Eyck, Heda, Claesz) have been acutely aware of the depiction of these materials. Even contemporary comic illustrators make it a priority to depict this phenomenology via denotative mechanisms like ‘lucaflection’ (Mort Walker).
It is intuitively tempting to assign the heavy lifting of material perception to the specularity of the material. Indeed, transparency and translucency seem to be special cases of our day-to-day experiences with materials — the vast majority of which seem relatively opaque. However, they are frequently not as opaque as they may seem (grapes, for example), and even those that are completely so still have sub-surface interactions with light that make for complicated depiction.
In a series of experiments we show that the spatial composition of the illuminating environment has a strong effect on material perception of non-trivial objects made from ostensibly opaque materials. Broad (i.e., low-frequency dominant) fields of illumination cause fiducially black materials to be perceived as ‘metal’, while sparse fields (small, isolated high-frequency information) bias perception of metal toward ‘black plastic’. Preliminary work with transparent and translucent materials suggests the same mechanisms may be at work — The structure of refracted environmental information plays an even more significant role than that of the specular highlights. Finally, multi-scale analysis of the illumination environment shows clustering more consistent with the empirical perceptual impressions of the surface than with the actual surface material.

Shiny things. Global illumination. Spherical harmonics. You know, for the kids.

Objects, Materials, Exaggeration, and Perception

For a talk @ the ASU SciHub SciAPP Workshop on Science, the Arts & Possibilities in Perception.

It is tempting to think of perception as some form of physical measurement. Indeed, animals seem to act as if they are constantly using their sensory systems to quantify their world — Distances before jumping, colors before eating, trajectories for catching, and so forth. Similarly, as much as we fetishize the ‘brain as computer’ metaphor, it isn’t 100% clear that, beyond some extremely simple analogs, the brain does anything resembling digital computation. Does an animal’s perception and action depend on range finders, spectrophotometers, thermometers and the like for input? Do we compute with this input and use it to drive servo-like motor operations? If not, then what is a plausible alternative?

This talk will outline some of the ways in which the human visual system is relatively unconcerned with accurate or even plausible physical mensuration. Specific to this meeting’s aims — producers of visual media have been aware, at least tacitly, of this insensitivity since the earliest production of images. This rich (but sometimes ‘secret’) font of heuristic information can act as inspiration for understanding our perception of the visual world.

For example, painters know that a geometrically and photometrically correct projection of the world onto an image plane is mostly immaterial to our ability to understand an image. Animators know that exaggerating motion in just the right ways makes it look more realistic. Sculptors create striking diaphanous objects using dense and opaque materials. We will show examples and empirical investigation into this phenomenological psychophysical universe.

Arduino Spectrophotometry

Well, OK, just measuring ambient light for now… but we’ll get there in class soon.

Students in my Computational Methods class are using an Arduino to do some simple sensor measurement stuff. I found a bunch of old FSRs and photoresistors in a bin in the lab. No markings so no datasheets.

No worries –

A little Mathematica “Connected Device” stuff to talk to the Arduino, my interface to Argyll CMS to talk to the i1 Pro, and adjusting the blinds in my office …

DeviceReadTimeSeries + getAmbient = profit$

Pretty linear… We’re ready to roll.
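The measurement-and-calibration loop above can be sketched with the built-in Connected Devices Arduino driver. This is a rough sketch under stated assumptions: a photoresistor voltage divider on analog pin A0, a made-up port name, and hypothetical calibration pairs standing in for the i1 Pro luminance readings.

```wolfram
(* Open the Arduino via the Connected Devices framework;
   the port name is a hypothetical example *)
arduino = DeviceOpen["Arduino", "/dev/ttyACM0"];

(* Sample the photoresistor on A0 every 0.5 s for 30 s *)
readings = DeviceReadTimeSeries[arduino, {30, 0.5}, "A0"];

(* Pair mean sensor readings with i1 Pro luminance at several blind
   positions; these numbers are invented for illustration *)
calib = {{120, 8.2}, {340, 24.1}, {610, 44.0}, {880, 63.5}};
fit = LinearModelFit[calib, x, x];
fit["RSquared"]  (* 'pretty linear' means this should be near 1 *)

DeviceClose[arduino];
```

With a fit like that in hand, raw A0 counts from the unmarked photoresistors can be mapped to real units without ever finding a datasheet.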

TAPASI – Monkeys and Microstimulation

I’m not sure if ‘A’ and ‘D’ are smoking 

Some monkeys and some microstimulation.

Courtesy of Gabe Diaz, a little scene from Graziano, Taylor and Moore, 2002. Dots indicate trajectory of the left arm after microstimulation of the right hemisphere PFC. Remember kids – PFC does everything!

But seriously, it’s a neat paper and there are more fun images, more than can possibly fit in The Archives™

Wasting Time – TheraPutty

Yes, so many other things I could be doing, but this is more important – using a webcam, some Mathematica, and therapy putty.

Somehow, as I age, I keep accidentally hurting myself. I know, weird right? So, while doing some he-man building things (putting a keyboard tray on my standing desk qualifies, right?) I accidentally wrenched my elbow while using a drill-driver, resulting in so-called “golf elbow” (similar to its tennis cousin, just the other tendon). 

Beth had some TheraPutty and I noticed that doing certain exercises with it seemed to help. That and Ibuprofen I suppose. I took two tubs, blue – stiff, and yellow – soft, to the lab to fondle while working. 

After about a week, I noticed that the blue, which was supposed to be super resistant, had softened to be almost as soft as the yellow. It was pretty strange: was I really ‘working it’ so much that I was able to break down seriously viscous rubber and other miscellaneous plasticizers with my bare brutish hands?

I fired off an email to the company that manufactures it, Fabrication Enterprises, Inc, down in White Plains. I wasn’t really expecting a response, I was just curious. 

I had my Ziggi camera sitting on my desk since I had just finished teaching. So, I decided to roll up the yellow and blue and do a time lapse of them ‘settling’ since that would be a relatively good estimate of their relative viscosity. A little Mathematica image processing to find the individual balls in the image, segment and measure them, and I ended up with the following:

Procrastination, quantified.

Sure enough, they spread out at almost exactly the same rate (the little glitch out around 5 minutes is where I had to re-start the image acquisition because, well, I really wasn’t being a good scientist, was I?) and pretty much ended up at exactly the same size after about 5-6 minutes. I wanted to go home and eat dinner, so I stopped the experiment — science has to eat.
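The segment-and-measure step is only a few lines of Wolfram Language. This is a rough sketch of the kind of analysis described above, not the actual notebook; the frame filenames and the assumption of a clean, light background are hypothetical.

```wolfram
(* Load the time-lapse frames; filename pattern is a hypothetical example *)
frames = Import /@ FileNames["putty-*.jpg"];

(* Segment the dark putty balls against a light desk background and
   measure each ball's equivalent-disk radius, in pixels *)
measureBalls[img_] := Module[{bin},
  bin = FillingTransform[Binarize[ColorNegate[img]]];
  Values@ComponentMeasurements[bin, "EquivalentDiskRadius"]];

(* One list of radii (here, two balls: blue and yellow) per frame *)
radii = measureBalls /@ frames;

(* Radius over time for each ball; similar slopes imply similar
   effective viscosity as the balls slump and spread *)
ListLinePlot[Transpose[radii]]
```

Equivalent-disk radius is a handy proxy here: as a ball of putty slumps, its silhouette area grows, so plotting radius against frame number gives the spreading rate directly.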

Lo and behold, today I got a very nice email from Jason Drucker, Senior Vice President of Sales and Marketing at FEI. He apologized for taking time to get back to me but, as it turned out, he had sent my question to the senior manufacturing chemist for comment! That’s wonderfully nuts, and indeed, she/he replied:

I have never heard of the blue putty getting soft after using it for a long period of time. In fact, so soft that the yellow is firmer in viscosity. The only way this can happen is the putty was exposed to one of the following, alcohol, cleaning fluids, hand creams or any solvents. These ingredients help breakdown the silicone polymer. Once broken down the putty will become softer.

Chief Chemist- FEI

Guess what — I was in the lab with the blue and cleaned up some of the 3D printer resin, using the only thing that works, isopropyl alcohol, 99%. I almost certainly had some on my hands at some point and totally destroyed my blue putty. 

So, my bad, and let this be a lesson to you. If you’d like the images and the Mathematica, you can grab it here. Let me know if you play around with it. May your procrastination be fruitful.