Hands are the chief appendages with which we manipulate the world around us, creating sounds as they go. As such, they are a rich source of information that computers can leverage for input and context sensing. Indeed, many prior works in HCI have explored this idea by instrumenting users' hands with a microphone, often integrated into a ring, wristband, or watch. In this work, we explore an alternative, bare-hands approach: using a microphone array integrated into a user's headset/glasses, we apply beamforming to create a virtual microphone that tracks with the user's fingers in 3D space. We show this method can capture even the subtle noise of a finger translating across surfaces, including skin-to-skin contact for micro-gestures, as well as passive widget interactions.
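The paper's exact signal-processing pipeline is not described here, but the core idea, steering a virtual microphone at a tracked fingertip, can be illustrated with a basic delay-and-sum beamformer. The sketch below is an assumption-laden illustration, not the authors' implementation: the array geometry, the 48 kHz sample rate, and the `steer_virtual_mic` function are all hypothetical.

```python
# Minimal sketch of delay-and-sum beamforming toward a tracked 3D point.
# Assumptions (not from the paper): frame-based processing, a headset-frame
# coordinate system shared by the mics and the hand tracker, and fractional
# delays applied as frequency-domain phase shifts.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature


def steer_virtual_mic(frames: np.ndarray,
                      mic_positions: np.ndarray,
                      finger_pos: np.ndarray,
                      sample_rate: int = 48_000) -> np.ndarray:
    """Delay-and-sum beamformer focused at a 3D point.

    frames:        (n_mics, n_samples) time-aligned microphone signals
    mic_positions: (n_mics, 3) mic coordinates in the headset frame (meters)
    finger_pos:    (3,) tracked fingertip position, same frame (meters)
    Returns the (n_samples,) beamformed signal for this frame.
    """
    n_mics, n_samples = frames.shape

    # Propagation delay from the focal point to each microphone.
    dists = np.linalg.norm(mic_positions - finger_pos, axis=1)
    delays = dists / SPEED_OF_SOUND

    # Align every channel to the farthest mic so all shifts are non-negative.
    rel_delays = delays.max() - delays

    # Apply fractional delays as phase shifts in the frequency domain, then
    # average across channels. The phase shift is a circular delay, which is
    # acceptable for short frames where inter-mic delays are a few samples.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sample_rate)
    spectra = np.fft.rfft(frames, axis=1)
    phase = np.exp(-2j * np.pi * freqs[None, :] * rel_delays[:, None])
    aligned = np.fft.irfft(spectra * phase, n=n_samples, axis=1)
    return aligned.mean(axis=0)
```

In practice, the fingertip position would come from the headset's hand tracker, and the beam would be re-steered every audio frame so the virtual microphone follows the moving finger.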
Research Team: Daehwa Kim, Chris Harrison
Daehwa Kim and Chris Harrison. 2026. SoundBubble: Finger-Bound Virtual Microphone using Headset/Glasses Beamforming. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26). Association for Computing Machinery, New York, NY, USA. DOI: https://doi.org/10.1145/3772318.3791589