Desney Tan Uses Sound Waves for PC Gesture Control

Microsoft researcher Desney Tan has created a gesture-based control system using only a PC’s speakers and microphones.

Today’s dominant motion-sensing technologies, including Microsoft’s Kinect, use cameras to translate movement into digital instructions. Tan’s SoundWave software instead exploits the Doppler effect, using built-in speakers and microphones to let personal computers interpret simple hand gestures as commands.

The software is in its early stages of development, but it can already sense the distinctive changes to ultrasonic waves produced by a variety of hand gestures, says Tan, a principal researcher at Microsoft Research. He sees potential for using SoundWave to control smartphones and other portable computing devices, because they now come equipped with multiple speakers and microphones.

Last summer Tan was working on ways to use ultrasonic transducers to create haptic effects when a researcher in his group noticed that body motion was altering the sound waves: the Doppler effect was shifting the tone being picked up by the microphones.

What makes the technology even more appealing is that no special equipment is needed to turn a typical PC into an ultrasonic motion detector. All that’s required is the SoundWave software, which generates a constant ultrasonic tone between 20 and 22 kilohertz from standard speakers. In accordance with the Doppler effect, the tone picked up by the computer’s microphone shifts to a higher frequency when an object, such as a hand, moves toward the machine and to a lower frequency when it moves away. The software translates these tonal changes into commands with no perceptible delay, and its accuracy has reached the 90 percent range.
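To make the mechanics concrete, here is a minimal sketch of Doppler-based motion sensing in the spirit of SoundWave; it is not Microsoft’s code, and the pilot frequency, band edges, and thresholds are illustrative assumptions. Because the tone reflects off a moving hand while the source and microphone stay put, the shift is roughly 2(v/c)·f, so a hand moving at half a meter per second shifts a 20 kHz tone by about 58 Hz, well clear of the direct-path tone.

```python
# Minimal Doppler motion-sensing sketch (assumes numpy and sounddevice).
# All parameter values are illustrative, not taken from SoundWave.
import numpy as np
import sounddevice as sd

FS = 44100       # sample rate (Hz)
PILOT = 20000    # inaudible pilot tone (Hz), inside the 20-22 kHz band
BLOCK = 4096     # samples per analysis window (~93 ms, ~10.8 Hz/bin)

def motion(block):
    """Classify motion from spectral energy around the pilot tone.

    The direct-path tone stays at PILOT; a reflection off a moving hand
    is Doppler-shifted, spilling energy above the pilot when the hand
    approaches and below it when the hand retreats.
    """
    windowed = block[:, 0] * np.hanning(len(block))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), 1 / FS)
    upper = spectrum[(freqs > PILOT + 30) & (freqs < PILOT + 150)].sum()
    lower = spectrum[(freqs > PILOT - 150) & (freqs < PILOT - 30)].sum()
    if upper > 2 * lower:
        return "toward"
    if lower > 2 * upper:
        return "away"
    return "idle"

offset = 0  # running sample index, keeps the emitted tone continuous

def callback(indata, outdata, frames, time, status):
    global offset
    t = (offset + np.arange(frames)) / FS
    outdata[:, 0] = 0.2 * np.sin(2 * np.pi * PILOT * t)  # emit pilot tone
    offset += frames
    event = motion(indata)
    if event != "idle":
        print(event)

# Full-duplex stream: play the tone and analyze the mic simultaneously.
with sd.Stream(samplerate=FS, blocksize=BLOCK, channels=1, callback=callback):
    sd.sleep(10_000)  # run for 10 seconds
```

A real detector would smooth estimates across windows and examine the shape of the shifted band rather than a simple energy ratio, but the ratio test is enough to show why no extra hardware is needed.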

Gestures such as swiping a hand up, down, toward, or away from the body can scroll through pages and perform other navigation functions. The software could also detect a user approaching or walking away from a computer and automatically wake it or put it to sleep, says Tan.
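As a sketch of how such motion events might be turned into commands, the snippet below builds on the hypothetical detector above rather than on SoundWave itself; the event names, window size, and thresholds are all assumptions.

```python
# Hypothetical mapping from per-window motion events ("toward",
# "away", "idle") to UI commands; thresholds are illustrative.
from collections import deque

WINDOW = 40                      # remember the last ~40 analysis windows
recent = deque(maxlen=WINDOW)

def dispatch(event):
    recent.append(event)
    if recent.count("toward") >= 0.8 * WINDOW:
        print("wake computer")   # sustained approach: the user sat down
    elif recent.count("away") >= 0.8 * WINDOW:
        print("sleep computer")  # sustained departure: the user left
    elif event == "toward":
        print("scroll down")     # brief gesture toward the screen
    elif event == "away":
        print("scroll up")
```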

Tan received his BS in computer engineering in 1996 from the University of Notre Dame. That was followed by a two-year stint in the Singapore Armed Forces, during which he devoted time to “building bridges and blowing things up”.

He then returned to the US to earn his PhD in computer science from Carnegie Mellon University in 2004. In 2007 he was named one of MIT Technology Review’s Young Innovators Under 35 for his work on brain-computer interfaces.

Since 2004 Tan has been a researcher in the Visualization and Interaction Group at Microsoft Research, where he now manages the Computational User Experiences group. He also holds an affiliate faculty appointment in the Department of Computer Science and Engineering at the University of Washington.

Drawing on his interest in human-computer interaction and brain-computer interfaces, Tan devotes his time to understanding and building applications for large displays, multi-device systems, and wearable brain-imaging devices, as well as to projects in a variety of other domains.