WTC Mathematica Live Coding Championship
This Thursday, 8 October, at 6 PM EDT: the Wolfram Technology Conference Live Coding Challenge. As in years past, I will be your genial co-host, along with Philip Maymin.
It’s a total blast to watch people competitively code, and really impressive to see how the Wolfram Language / Mathematica makes these types of problems straightforward to solve economically. Tune in at twitch.tv/stephen_wolfram to watch and vote for the winners, or Youtube.com/user/WolframResearch if you just want to watch passively and don’t want the responsibility. (Watch and vote even if you have no idea what’s going on!)
Bouncing Balls of Exaggeration
A poster from VSS2019; we’re finally about to submit the paper. Looking at animation, physics, and the impossible yet perceptually preferable exaggeration therein.
Phillips, F., Schmidt, F., Noejovich, L., & Chakalos, G. (2018). Exploring the Uncanny Valley. Journal of Vision, 18(10), 348-348.
History of Jazz Drumming
Mel Lewis’ epic interviews with Loren Schoenberg on the history of drumming in jazz.
A good while back, Jim Rupp lent me a bunch of CDs. He said “you’ll like these” and he was right.
For years, I hosted them on Skidmore’s internal web server, Hudson. Well, now that I’m leaving for RIT and Hudson is being decommissioned, it’s time to migrate.
So — if you’re interested, I’m moving the recordings to drums.flipphillips.com. I still have to edit the last four episodes, but at least everything is there for downloading.
Dangerous awards
A visit to Champaign, a return flight with a ‘possible weapon’.
I went to the 2019 Wolfram Technology Conference last week in snowy Champaign, Illinois. I gave a talk on some stuff we were doing in our classes that used the Wolfram Language, some Arduino hardware, and some big data from fruits. It was fun – though I had the dreaded last talk on the last day, so a big thanks to the folks who stuck around.
It was during happy hour, so I thought I should maybe have bought everyone drinks?
I got to co-MC the Live Coding Championship —
We have a winner! Congratulations to Gerli Jogeva, our LiveCoding champion 🏆 #WolframTechConf pic.twitter.com/NpXezYrI7E
— Wolfram (@WolframResearch) October 31, 2019
which, if you’re clever and bored enough, you can find on Twitch somewhere. I made a ‘waning gibbous’ joke that I probably should take back. Gerli dominated #girlswhocode; it was really great fun.
I also, a little unexpectedly, was the recipient of a Wolfram Innovation Award —
which was super nice and a great honor. I now have one ‘moment of prestige’ in common with Nassim Nicholas Taleb. That and, on my deathbed, I will receive total consciousness.
So I got that goin’ for me, which is nice.
On the way through TSA at the Champaign-Urbana airport (Sign: “TSA is out, but will be back at 9:00AM”), my bags were searched because of ‘something dangerous’ — AKA the award. The agent congratulated me though, making the dirty-underwear and t-shirt dishevelment of my bags a little more bearable.
On that flight, I sat next to wheelchair racer Jenna Fesemyer, a fellow Ohioan, who was on her way to the NYC Marathon (where she placed 4th in the US and 7th internationally!). I bothered her with endless conversation about bike technology. She humored me, which was also nice.
All in all, and despite all the ‘stuff’ I had to do, it was really a lot of fun and I came back not so much tired (I did have to chainsaw up a big ol’ tree that fell down in the wind, so that was more tiring; but I’m a recreational lumberjack now) but rejuvenated and inspired.
Apples and Oranges
A talk I gave at the Wolfram Technology Conference, 2019.
In our “Computational Methods for Psychology and Neuroscience” course, we teach undergraduate students the fundamentals of computational thinking (as opposed to traditional “programming”) using a project-based approach. Over the years, project topics have ranged from linguistics and video image analysis to Dynamic[]-driven data collection, analysis and presentation, machine learning, and beyond. Most recently, we chose colorimetry and psychophysics as our project theme. Using the Connected Devices framework and an Arduino for data collection, we built a machine learning model from publicly available hyperspectral data that could reliably discriminate fruit types from simple, low-dimensional spectral scans.
The resulting project was well received by students and covered a broad range of topics that are useful in neuroscience, including procedural programming of the Arduino, basic electronics, sensor-based data acquisition, functional programming in the Wolfram Language, instrument calibration, analysis, visualization, and machine learning. Here we discuss the various challenges and successes in this 15-week class.
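The course itself used the Wolfram Language, but the shape of the final project (discriminating fruit from a handful of spectral bands) can be conveyed with a minimal Python sketch. Everything here is invented for illustration: the four band reflectance values, the noise level, and the nearest-centroid classifier standing in for the real model and the real hyperspectral data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-band spectral sensor (the kind of low-dimensional reading
# an Arduino-attached spectral chip might give); values are invented.
def sample_scans(center, n=50, noise=0.03):
    """Simulate n noisy reflectance scans around a class-mean spectrum."""
    return center + rng.normal(0, noise, size=(n, len(center)))

apple_mean  = np.array([0.55, 0.20, 0.15, 0.40])  # made-up reflectances
orange_mean = np.array([0.60, 0.45, 0.10, 0.35])

X = np.vstack([sample_scans(apple_mean), sample_scans(orange_mean)])
y = np.array([0] * 50 + [1] * 50)  # 0 = apple, 1 = orange

# Nearest-centroid classifier: about the simplest "machine learning model"
# that can discriminate fruit types from a few spectral bands.
centroids = np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def classify(scan):
    return int(np.argmin(np.linalg.norm(centroids - scan, axis=1)))

preds = np.array([classify(x) for x in X])
accuracy = (preds == y).mean()
```

Even this toy version makes the pedagogical point: with well-separated class spectra, a few noisy bands are enough for reliable discrimination.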
Here’s a link to the presentation.
And here’s a version of it rendered from the Wolfram Cloud –
Two recent submissions
So many things on the burner.
First – The Veiled Virgin paper has been submitted. We didn’t put it on a preprint server yet, we’re trying to figure out what we want to do there. If you’d like a copy of the preprint, email me.
Second – The material perception paper has been put up on arXiv. Go check it out at https://arxiv.org/abs/1908.00902 – if you dare! Let me know what you think.
So, about RIT
Time for a change of venue.
A long time ago I left Pixar and went back to graduate school because I was interested in understanding more about the creative stuff we did — how the tools facilitated, constrained, and otherwise ‘stereotyped’ what was created.
I’m still interested in that stuff.
I got in at the two places I applied — OSU, back in the Architecture and City Planning department of my undergraduate misadventures, and MIT in the Media Lab. After a whole bunch of soul searching, I ended up back at Ohio State (the fact that the grad student stipend was the same in Cambridge, Mass. and Columbus, Ohio had a little bit to do with it, but the decision was more complicated than that). While I was there, I met a guy in an early-morning vision science class (7:30 AM; breakfast with Lester Krueger), Vic Perotti (who, with amazing coincidence, is presently on the faculty at RIT), who was doing vision research using computer graphics! He introduced me to his advisor, Jim Todd, and I abandoned architecture school a second time for the siren call of the pixel.
I’ve spent the past 21 years in Saratoga Springs — just a beautiful place — studying and doing a lot of teaching about human perception, mostly vision. Skidmore has been a great place to be faculty. My niggles are trivial and my complaints few. I have great colleagues, made a few friends, and taught some great students. My family loves it here too.
After an extended courtship, RIT won me over with the opportunity to research and teach the shiny things that distracted me away from Pixar in the first place. I will be a ‘Professor of Motion Picture Science’ (now that is a $5 title! I’ll take it) at the newly established MAGIC center.
Thirty years ago, the opportunity to do this sort of stuff in an academic setting was pretty limited. In grad school I did a little work at ACCAD, which was what the Computer Graphics Research Group (where I spent the last 2+ years of my undergraduate career) had transmogrified into as Chuck Csuri retired. This was a time of great growth and hiring in the graphics and entertainment universe. Almost every graduate program on earth had to bend itself to the pragmatics of training folks for that scene. I’m OK with training, but I prefer teaching. I like learning but prefer researching. That’s why my time at Skidmore doing basic vision research was so rewarding.
Now I get to teach and research what I really wanted to do a few decades back. I get to do it in an exciting new center focused on real interdisciplinarity, alongside a whole armada of similarly-disposed colleagues and students — old friends from grad school, new friends from my sabbatical a few years back, a Skidmore parent who helped convince me to come visit RIT in the first place, and a student from my lab who ended up at RIT and is now professing himself.
I can’t wait.
Temporal consequences of spatial acuity reduction
Space-time visual insanity.
Some work I collaborated on concerning spatiotemporal vision. We have some very interesting findings hinted at in the ‘Puzzles’ section that I look forward to us getting out there.
Temporal consequences of spatial acuity reduction
Pawan Sinha, Sid Diamond, Frank Thorn, Jie Ye, Flip Phillips, Sharon Gilad-Gutnick, Shlomit Ben-Ami and Sruti Raja – MIT Brain and Cognitive Sciences, Wenzhou Medical College, Skidmore College Psychology and Neuroscience.
Various eye conditions, such as cataracts and refractive errors, induce spatial blur in the retinal image. This, by definition, reduces high spatial frequency content. How, if at all, does this impact the temporal structure of the visual input? What are the implications of any such spatio-temporal linkage?
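The premise that blur reduces high spatial frequency content can be demonstrated numerically. Below is a minimal Python sketch, not anything from the paper: a 1-D noise signal stands in for a row of retinal image intensities, and the blur kernel width and frequency cutoff are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D stand-in for a row of retinal image intensities.
signal = rng.normal(size=256)

# Gaussian blur kernel (sigma in samples); parameters are arbitrary.
sigma = 3.0
x = np.arange(-10, 11)
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()

blurred = np.convolve(signal, kernel, mode="same")

def high_freq_power(s, cutoff=0.25):
    """Sum of spectral power above a cutoff (in cycles/sample, max 0.5)."""
    spec = np.abs(np.fft.rfft(s))**2
    freqs = np.fft.rfftfreq(len(s))
    return spec[freqs > cutoff].sum()

# Blur strongly attenuates power above a quarter of the sampling rate.
ratio = high_freq_power(blurred) / high_freq_power(signal)
```

The interesting question the abstract raises is what happens next: how this spatial-frequency loss interacts with the *temporal* structure of input arriving at the retina over time.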
Effects of the Spatial Spectrum of Illumination on Material Perception
A little VSS 2019 fun with some old friends. Old as in length of time we’ve all known each other, not in geologic age.
Like all good science, things changed a little between abstract submission and the actual stuff we’ll present. This is mainly about the re-modeling of material properties by manipulating the global illumination. We have other experiments that cover the stuff in the abstract, but it just seemed to me like too much stuff to put in one poster. You’d be there for an hour while we walked through it. Also, as a direct challenge from a colleague who noticed that I used the most-words-ever on last year’s poster, I went totally minimal here.
Effects of the Spatial Spectrum on the Perception of Reflective and Refractive Materials
Flip Phillips, J Farley Norman, and James Todd – Skidmore, WKU, OSU
Highly reflective and refractive materials such as gemstones, polished metals, shimmering water, glazed ceramics and the like act as touchstones of visual wonder for humans. While this might simply be indicative of a “sparkly good!” mechanism of prehistoric origin, the question remains how the human visual system uses this information to identify materials. Since the 15th century, painters (e.g., van Eyck, Heda, Claesz) have been acutely aware of the depiction of these materials. Even contemporary comic illustrators make it a priority to depict this phenomenology via denotative mechanisms like ‘lucaflection’ (Mort Walker).
It is intuitively tempting to assign the heavy lifting of material perception to the specularity of the material. Indeed, transparency and translucency seem to be special cases of our day-to-day experiences with materials — the vast majority of which seem relatively opaque. However, they are frequently not as opaque as they may seem (grapes, for example), and even those that are completely so still have sub-surface interactions with light that make for complicated depiction.
In a series of experiments, we show that the spatial composition of the illuminating environment has a strong effect on the material perception of non-trivial objects made from ostensibly opaque materials. Broad (i.e., low-frequency dominant) fields of illumination cause fiducially black materials to be perceived as ‘metal’, while sparse fields (small, isolated high-frequency information) bias the perception of metal toward ‘black plastic’. Preliminary work with transparent and translucent materials suggests the same mechanisms may be at work; the structure of refracted environmental information plays an even more significant role than that of the specular highlights. Finally, multi-scale analysis of the illumination environment shows clustering more consistent with the empirical perceptual impressions of the surface than with the actual surface material.
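The broad-versus-sparse distinction in the abstract is, at bottom, a statement about where an illumination field concentrates its spatial-frequency energy. Here is a small Python sketch of that idea, with toy 1-D "illumination fields" (smoothed noise versus a few isolated highlights) standing in for real light probes; all sizes and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512

def smooth(s, sigma):
    """Gaussian-smooth a 1-D signal (sigma in samples)."""
    x = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(s, k / k.sum(), mode="same")

# Broad field: low-frequency-dominant illumination (heavily smoothed noise).
broad = smooth(rng.normal(size=n), sigma=20)

# Sparse field: a few small, isolated bright points (high-frequency energy).
sparse = np.zeros(n)
sparse[rng.choice(n, size=8, replace=False)] = 1.0

def spectral_centroid(s):
    """Power-weighted mean spatial frequency (cycles/sample), DC removed."""
    spec = np.abs(np.fft.rfft(s - s.mean()))**2
    freqs = np.fft.rfftfreq(len(s))
    return (freqs * spec).sum() / spec.sum()

# The sparse field concentrates its energy at much higher spatial
# frequencies than the broad field.
```

A multi-scale decomposition of a real light probe follows the same logic at each band of a pyramid rather than with a single summary statistic.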
Objects, Materials, Exaggeration, and Perception
For a talk @ the ASU SciHub SciAPP Workshop on Science, the Arts & Possibilities in Perception.
It is tempting to think of perception as some form of physical measurement. Indeed, animals seem to act as if they are constantly using their sensory systems to quantify their world — Distances before jumping, colors before eating, trajectories for catching, and so forth. Similarly, as much as we fetishize the ‘brain as computer’ metaphor, it isn’t 100% clear that, beyond some extremely simple analogs, the brain does anything resembling digital computation. Does an animal’s perception and action depend on range finders, spectrophotometers, thermometers and the like for input? Do we compute with this input and use it to drive servo-like motor operations? If not, then what is a plausible alternative?
This talk will outline some of the ways in which the human visual system is relatively unconcerned with accurate or even plausible physical mensuration. Specific to this meeting’s aims — producers of visual media have been aware, at least tacitly, of this insensitivity since the earliest production of images. This rich (but sometimes ‘secret’) font of heuristic information can act as inspiration for understanding our perception of the visual world.
For example, painters know that a geometrically and photometrically correct projection of the world onto an image plane is mostly immaterial to our ability to understand an image. Animators know that exaggerating motion in just the right ways makes it look more realistic. Sculptors create striking diaphanous objects using dense and opaque materials. We will show examples and empirical investigation into this phenomenological psychophysical universe.