We demonstrate a method for turning textured surfaces, in particular textile patches, into opportunistic input interfaces using a machine learning model pre-trained on acoustic signals generated by scratching different fabrics. A single short audio recording then suffices to characterize both the gesture and the textured substrate. The sensing method requires no special coating, additional sensors, or wiring; it is passive and works well with regular microphones, such as those embedded in smartphones.
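The pipeline described above (record a short scratch, extract a spectral feature, classify texture and gesture jointly) can be sketched as follows. This is a minimal illustrative stand-in, not the authors' implementation: the feature (a mean log-spectrogram profile), the nearest-centroid classifier, the synthetic "scratch" generator, and all names (`NearestCentroidRecognizer`, `synth`, etc.) are hypothetical simplifications.

```python
import numpy as np

def log_spectrogram(signal, frame=256, hop=128):
    """Magnitude log-spectrogram via a windowed short-time FFT."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.log1p(np.abs(np.fft.rfft(np.asarray(frames), axis=1)))

def feature_vector(signal):
    """Collapse the spectrogram over time to a fixed-length spectral profile."""
    return log_spectrogram(signal).mean(axis=0)

class NearestCentroidRecognizer:
    """Toy joint texture/gesture recognizer: one centroid per
    (texture, gesture) class, fit on a few labelled recordings."""
    def fit(self, recordings, labels):
        feats = np.array([feature_vector(r) for r in recordings])
        self.classes_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[[l == c for l in labels]].mean(axis=0)
             for c in self.classes_])
        return self

    def predict(self, recording):
        d = np.linalg.norm(self.centroids_ - feature_vector(recording), axis=1)
        return self.classes_[int(np.argmin(d))]

rng = np.random.default_rng(0)

def synth(texture, gesture, n=4096):
    """Crude stand-in for a scratch recording: noise with a
    texture-dependent spectral tilt and a gesture-dependent envelope."""
    spec = np.fft.rfft(rng.standard_normal(n))
    tilt = (np.linspace(1.0, 0.1, len(spec)) if texture == "denim"
            else np.linspace(0.1, 1.0, len(spec)))
    sig = np.fft.irfft(spec * tilt, n)
    if gesture == "tap":          # short burst instead of a sustained swipe
        sig[n // 4:] = 0.0
    return sig

pairs = [(t, g) for t in ("denim", "silk") for g in ("swipe", "tap")]
train = [synth(t, g) for (t, g) in pairs for _ in range(5)]
labels = [(t, g) for (t, g) in pairs for _ in range(5)]
rec = NearestCentroidRecognizer().fit(train, labels)
print(rec.predict(synth("denim", "tap")))   # texture and gesture recovered jointly
```

In this toy setup the spectral tilt separates the textures and the energy envelope separates the gestures, so a single short recording maps to one (texture, gesture) class, mirroring the idea of characterizing both from one signal.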
Our prototype performs simultaneous texture and gesture recognition provided the microphone is close to the input surface (e.g. under the fabric) or the patch is attached to a solid body that transmits the sound waves. This research paves the way for collaboration between wearables researchers and fashion designers, which could lead to stitched patches with robust acoustic signatures that do not compromise aesthetic elements.