Many of our interactions with machines today are cold, manual, and passive. We push a few buttons and wait, often impatiently, for our electronics to do what they're supposed to. In a world where we now spend more time staring at the screens of our portable electronics than we do looking at actual people, it would be nice to interrupt the technological feedback loop with a little emotional communication. Enter Rapport, a project that attempts to bridge the gap between human emotion and the machines we've come to enjoy.
Created by designers Joe Smith, Yeawon Choi, Yifei Chai, and Yuchang Chow, Rapport is a device that tracks, analyzes, and reacts to your facial expressions in order to select a music playlist that best suits your mood. To start, the observer must make eye contact with the device. The device then leans forward and analyzes the observer's facial expression, along with the time of day, to determine which song might suit the observer's mood. Rapport then plays the song at low volume; if it detects excitement in the observer's face, such as smiling, it increases the volume. If the device does not detect excitement, it changes the song, and if it detects surprise, Rapport shrinks back and returns to its original setting.
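The reaction rules described above amount to a small decision loop. As a rough illustration, here is a minimal sketch in Python; the function name, expression labels, and volume steps are all assumptions for the example, not the team's actual code:

```python
def react(expression, volume, base_volume=20):
    """Decide one reaction step, per the behavior the article describes.

    expression: a detected label such as "excitement", "neutral", or "surprise"
    volume: current playback volume (0-100)
    Returns an (action, new_volume) pair.
    """
    if expression == "excitement":
        # Smiling: keep the current song and turn the volume up.
        return "raise_volume", min(100, volume + 10)
    if expression == "surprise":
        # Surprise: shrink back and reset to the original low-volume state.
        return "shrink_back", base_volume
    # No excitement detected: try a different song, still at low volume.
    return "change_song", base_volume
```

For instance, `react("excitement", 20)` keeps the song and nudges the volume to 30, while `react("surprise", 80)` resets the device to its starting volume.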
While simple in aesthetics, the inner workings of the device are quite complex. Inside are a webcam, a computer, several stepper motors, and four basic cables. The device relies on four different software programs: Visual Studio (stores the facial recognition library and eye tracking code), Processing (runs the facial recognition library), Max/MSP (controls volume and curates music), and Arduino (drives the stepper motors inside the device).
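Conceptually, those four programs form a pipeline: eye tracking gates the interaction, facial analysis produces a mood, music curation picks a song, and the motors animate the device. A hypothetical sketch of that flow, with stand-in functions invented purely for illustration (Rapport's real components communicate across separate programs, not one Python script):

```python
def detect_eye_contact(frame):
    # Stand-in for the eye tracking step (Visual Studio in the real build).
    return frame.get("eyes", False)

def analyze_face(frame):
    # Stand-in for the facial recognition step (Processing in the real build).
    return frame.get("expression", "neutral")

def choose_song(expression, hour):
    # Toy mood-plus-time-of-day curation (Max/MSP in the real build).
    period = "morning" if hour < 12 else "evening"
    return f"{expression}-{period}-playlist"

def lean_forward():
    # Would drive the stepper motors (Arduino in the real build).
    pass

def run_pipeline(frame, hour):
    """One pass of the interaction: gate on eye contact, then react."""
    if not detect_eye_contact(frame):
        return None
    lean_forward()
    return choose_song(analyze_face(frame), hour)
```

With this toy model, a frame with eye contact and a "happy" expression at 9 a.m. would yield a "happy-morning-playlist", while a frame without eye contact yields nothing.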
The team behind the project hopes to make interaction with machines more seamless and natural. For example, the technology behind Rapport could be used in an educational setting to detect a student’s anxiety or excitement and deliver encouragement.
To see Rapport in action, check out the video here.