Complicated sports and movie watching

While I was on sabbatical in Gießen I was thrilled to have super (über?) fast internet and a nice television.

Unfortunately for me, the good folks at Apple / Netflix / HBO / etc. don’t want people in other countries to be able to easily access American™ feeds (and vice-versa of course, and I get the whole ‘licensing’ thing). Still, I paid for it, and it would be nice to be able to access the things I pay for when I am in places other than my usual places.

So, what to do? Basically: set up a server back in America™, VPN into that thing, then try my best to convince the AppleTV in my apartment to use that connection. Turns out, this was more complicated than you’d think, since the AppleTV does bits of voodoo so that, even though it was connected to a VPN back in Saratoga, it still ‘knew’ it was in Germany. So my approaches involved various iOS devices, screen sharing, and voodoo strategies.

My first iteration looked like this:

(Note that I had improperly drawn the flag of Germany, unfortunately making it the flag of Belgium. I’m pretty sure they get that all the time… I erased it here.)

I wanted to watch Arsenal and, sure enough, whatever sports network my apartment building had was not really that ‘diverse’ in the sporting sense, so that’s how I did it. It was ridiculous.

So, to make it even more ridiculous, I went with version 2 here:

This required internet sharing on the laptop, hard-wiring the AppleTV, and using screen sharing only.

I’m sure neither of these works anymore, but I found these Paper drawings I made to remind myself and thought I should revisit my insanity.

Gromitcam – in stereo

Back in 1997 I made a stereo webcam for keeping track of my dog, Gromit.

I was rolling through some archives and found an image from January 1998. It used two Logitech golf-ball cams and a bunch of ad hoc software that pushed the frames to my machine at the OSU Vision Laboratory. I’ll see if I can find some photos of the rig, but I dare say it was one of the first stereo webcams ever. (Update: found some.)

You can take the boy out of architecture school

rat palace

This is a rodent enclosure for behavioral experiments.

We printed the main cage, post retainers, and lid in ABS on our uPrint, and the ‘stay off the roof’ roof on our Formlabs printer in a nice, slick, durable resin. There are 4 of these babies in Hassan’s lab, in our awesome 80/20 observation cages. All parametrically designed with OpenSCAD.

rat palace

Testre le embed das iframe

This is a test of an embedded iframe of fun and joy.

Please excuse anything annoying.

[advanced_iframe securitykey="c8f174978f95886638babe6f19ad6e691566779b" src="https://www.wolframcloud.com/objects/daf49e5e-2e03-4c1e-b1c4-b5750bbd5fdc?_view=frameless" width="100%" height="600"]

Most sincerely.

Bouncing Balls

A phase-space plot of height v. exaggeration for two different types of balls with two different coefficients of restitution.

Nothing important here, just a fun looking plot.
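The physics behind a plot like this is compact enough to sketch. For an ideal bounce, a ball with coefficient of restitution e rebounds with velocity e·v, so each rebound reaches e² times the previous height. Here is a minimal Python sketch of that recurrence; the function name and the two example e values are my own illustration, not taken from the plot above:

```python
# Successive rebound heights of a dropped ball. With coefficient of
# restitution e, rebound speed is e * impact speed, and since height
# scales with v^2, each bounce reaches e^2 times the previous height.

def bounce_heights(h0, e, n):
    """Return the first n rebound heights for a drop from height h0."""
    heights = []
    h = h0
    for _ in range(n):
        h *= e * e
        heights.append(h)
    return heights

# Two hypothetical balls: a lively one (e = 0.9) and a dead one (e = 0.5).
lively = bounce_heights(1.0, 0.9, 5)  # decays slowly: 0.81, 0.66, ...
dead = bounce_heights(1.0, 0.5, 5)    # decays fast: 0.25, 0.0625, ...
```

Plotting each bounce’s height against the next gives the phase-space-style picture: balls with different e fall on different straight lines of slope e².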

Fencing Eyetracking Data

A while back, we did a nice study on eye movements during fencing.

This is a figure from one of the posters; we should finish this and publish it some day. We should do a lot of things.

Medical Imaging, 1986 Style.

I was going through some slides for a talk today and I stumbled on this one from 1986 of a tool I wrote for analyzing cardiac ejection fraction.

You can tell it’s old because it’s in black and white.

It was written on a Sun II named jabba (I left the prompt at the top to show you) at the Computer Graphics Research Group at The Ohio State University. It used a client-server setup where the heavy-duty image processing was done on a Convex mini-supercomputer (where ‘mini’ = the size of a small car) and the user interface was done on the Sun. Indeed, it used a ‘front end’ and ‘kernel’ model like the one Theo Gray and Stephen Wolfram adopted (not in any way due to my stuff, I’m certain) for Mathematica. Coincidentally, I would go on to beta test early versions of Mathematica, befriend Stephen and Theo, and even work for WRI for a while.

I wrote a paper about this model, submitted it to the Convex User Group, and received a Macintosh SE as a prize for it! (Scott Dyer, also of CGRG, wrote about a rendering engine they were producing and also received a Mac.) Nowadays, when I submit a paper, I wait 3 months for it to be ‘peer reviewed’, make revisions to suit some reviewer or editor’s agenda, and then pay for the privilege of it being placed in a ‘blessed publication’ (aka web server). Amen. Praise science. (This subject is a whole blog post in itself, of course.)

Part of this tool (the cine module) apparently remained the ‘go to’ playback tool at CGRG for several years after I left. After I wrote this, I went to Pixar, where I designed some medical imaging software user interfaces. This was in the stone ages of UX/UI; no one called it those things (‘GUI’, maybe), so we called it a ‘user in your face’.

Man we were clever.

Mmmm, fresh data

Some data from the summer project on the Uncanny Valley.

Can’t give it all away yet, but this is cool so far.

soon, soon

Travels in the uncanny valley

Here’s a screencast of my talk at the Rochester Institute of Technology Symposium on VR/AR Technology.

It is from 3 December and a little low-key, so drink a little coffee before you start watching so you don’t fall asleep. Amazingly, the MAGIC Center had no lecture recording, so I’ve recreated it here with about 60% less energy because I was sitting down the whole time.

Enjoy.

Abstract


In 1970, Masahiro Mori posited the existence of an “uncanny valley” in the human perception and appreciation of robots. This hypothesis suggests that, as robots become more ‘human-like’, our appreciation of them increases, right up to a crucial point where we find them nearly, but not perfectly, realistic. At this point, human preference plummets into the aforementioned chasm. Although much has been written about this phenomenon, there is little scientific evidence of its existence. Obviously the phenomenon isn’t limited to robotics and has been observed in many other areas. These include the fine arts, especially photorealistic painting, sculpture, computer graphics, traditional animation, and more recently the rotoscoping / motion capture used in contemporary filmmaking. The informal and heuristic practices of the fine arts, especially those of traditional animation, have much to offer our understanding of the appearance of reality. In this talk, we will examine some historical perspectives on the uncanny valley as well as virtual and augmented reality-centric questions.