L. Zhuang, F. Zhou, J. D. Tygar, "Keyboard Acoustic Emanations Revisited", in 12th ACM Conference on Computer and Communications Security (CCS'05), pp. 373-382, November 2005.

Abstract

We examine the problem of keyboard acoustic emanations. We present a novel attack that takes as input a 10-minute sound recording of a user typing English text on a keyboard and recovers up to 96% of the typed characters. No labeled training recording is needed. Moreover, the recognizer bootstrapped this way can even recognize random text such as passwords: in our experiments, 90% of 5-character random passwords using only letters can be generated in fewer than 20 attempts by an adversary; 80% of 10-character passwords can be generated in fewer than 75 attempts. Our attack uses the statistical constraints of the underlying content, the English language, to reconstruct text from sound recordings without any labeled training data. The attack combines standard machine learning and speech recognition techniques, including cepstrum features, Hidden Markov Models, linear classification, and feedback-based incremental learning.

@inproceedings{1102169,
  author    = {Li Zhuang and Feng Zhou and J. D. Tygar},
  title     = {Keyboard acoustic emanations revisited},
  booktitle = {CCS '05: Proceedings of the 12th ACM conference on Computer and communications security},
  year      = {2005},
  isbn      = {1-59593-226-7},
  pages     = {373--382},
  location  = {Alexandria, VA, USA},
  doi       = {http://doi.acm.org/10.1145/1102120.1102169},
  publisher = {ACM Press},
  address   = {New York, NY, USA},
}

http://dx.doi.org/10.1145/1102120.1102169
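The abstract names cepstrum features as the attack's first processing step. As a hedged illustration only (not the authors' implementation, which is not reproduced here), the real cepstrum of an audio frame is the inverse DFT of the log magnitude spectrum; a minimal stdlib-only sketch, with a toy sinusoid standing in for a keystroke frame:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2); fine for a short demo frame)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * i * k / n) for k in range(n))
            for i in range(n)]

def real_cepstrum(frame, eps=1e-12):
    """Real cepstrum: inverse DFT of the log magnitude spectrum.

    Keystroke-recognition pipelines of this kind typically compute such
    features per keystroke audio frame before classification/clustering.
    """
    spectrum = dft(frame)
    log_mag = [math.log(abs(c) + eps) for c in spectrum]  # eps avoids log(0)
    n = len(log_mag)
    # Inverse DFT; the input is real and symmetric, so keep the real part.
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * i * k / n)
                for k in range(n)).real / n
            for i in range(n)]

# Toy 64-sample frame: a sinusoid standing in for a keystroke "push" peak.
frame = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
features = real_cepstrum(frame)
```

In practice one would use an FFT and frame the recording around detected keystroke onsets; the sketch only shows the feature definition itself.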