UK researchers use AI model to identify keystrokes from audio with over 90% accuracy

Researchers in the United Kingdom have used artificial intelligence to identify laptop keystrokes from the sound of typing with surprising accuracy. 

A recent study, reportedly published as part of the IEEE European Symposium on Security and Privacy Workshops, simulated a cyberattack in which a deep learning model classified laptop keystrokes using audio captured over the video-conferencing platform Zoom and by a nearby smartphone's built-in microphone. 

Computer scientists from Durham University, University of Surrey and Royal Holloway University of London said that, when trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%. That’s the highest accuracy seen without the use of a language model. 

When trained on keystrokes recorded using Zoom, an accuracy of 93% was achieved. 
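
For readers curious how such a classifier fits together, the pipeline below is a minimal sketch, assuming PyTorch and torchaudio: each recorded keystroke clip is converted to a log-mel spectrogram and passed to a small convolutional network that predicts which key was pressed. The spectrogram settings, layer sizes and key set are illustrative assumptions, not the configuration used in the study.

```python
# Sketch of an acoustic keystroke classifier: keystroke audio -> mel-spectrogram -> small CNN.
# All shapes and hyperparameters are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 44_100   # assumed recording rate
NUM_KEYS = 36          # e.g. letters + digits (assumption)

# Turn a raw 1-D keystroke waveform into a mel-spectrogram "image".
to_melspec = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=225, n_mels=64
)

class KeystrokeClassifier(nn.Module):
    """Small CNN over log-mel spectrograms of individual keystroke sounds."""
    def __init__(self, num_keys: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_keys)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> spectrogram: (batch, 1, n_mels, time)
        spec = to_melspec(waveform).unsqueeze(1)
        spec = torch.log(spec + 1e-6)  # log scale for dynamic range
        return self.classifier(self.features(spec).flatten(1))

# Example: classify a batch of 0.3-second keystroke clips (random data here).
model = KeystrokeClassifier()
clips = torch.randn(8, int(0.3 * SAMPLE_RATE))
predicted_keys = model(clips).argmax(dim=1)
```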

Further research would be necessary to determine whether other discreet methods of recording maintain similar effectiveness, the authors said. 

“Our results prove the practicality of these side channel attacks via off-the-shelf equipment and algorithms,” they said, asserting that with developments in deep learning, “acoustic side channel attacks present a greater threat to keyboards than ever.”

To avoid such an attack, known as an acoustic side channel attack or ASCA, the researchers said the study's results imply that “simple typing style changes could be sufficient.” 

“When touch typing was used, [additional computer scientists] saw keystroke recognition reduce from 64% to 40%, which (while still an impressive feat) may not be a high enough accuracy to account for a complex input featuring the shift key, backspace and other non-alphanumeric keys,” they noted. 

A second line of defense would be randomized passwords that mix upper and lower case, since the authors said it is hard to work out from audio alone when someone releases the shift key.
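
As a rough illustration of that advice, the snippet below generates a randomized password mixing cases, digits and symbols using Python's standard secrets module; the length and character set are arbitrary choices for the example, not recommendations from the study.

```python
# Sketch of a randomized, mixed-case password generator.
import secrets
import string

def random_mixed_case_password(length: int = 16) -> str:
    """Return a random password drawn from upper, lower, digit and symbol characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_mixed_case_password())
```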

They also detailed sound-based countermeasures, such as adding randomly generated fake keystrokes to transmitted audio, and noted the protection offered by two-factor authentication. 
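
A minimal sketch of that fake-keystroke idea, assuming NumPy and a raw outgoing audio buffer: short bursts of noise are mixed in at random offsets so genuine keystrokes are harder to isolate. The burst shape, count and amplitude are illustrative assumptions, not the paper's implementation.

```python
# Sketch: mix random synthetic "click" bursts into an audio buffer to mask real keystrokes.
import numpy as np

SAMPLE_RATE = 44_100

def inject_fake_keystrokes(audio, n_fakes=20, click_ms=10.0, amplitude=0.05):
    """Return a copy of `audio` with short noise bursts added at random offsets."""
    rng = np.random.default_rng()
    out = audio.copy()
    click_len = int(SAMPLE_RATE * click_ms / 1000)
    envelope = np.hanning(click_len)  # soften the burst edges
    for _ in range(n_fakes):
        start = rng.integers(0, len(out) - click_len)
        out[start:start + click_len] += amplitude * envelope * rng.standard_normal(click_len)
    return out

# Example: add fake keystrokes to 5 seconds of (silent) outgoing audio.
protected = inject_fake_keystrokes(np.zeros(5 * SAMPLE_RATE))
```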

“As more laptops begin to come with biometric scanners built in as standard, the requirement for input of passwords via keyboard is all but eliminated, making ASCAs far less dangerous. However, as stated in [additional research], a threat remains that data other than passwords may be retrieved via ASCA,” they highlighted.

The researchers noted that more recent studies have shown compromised smartphone microphones repeatedly inferring text typed on touchscreens with concerning accuracy, and that countermeasures such as muting microphones or not typing at all became less feasible with the shift to remote work during the COVID-19 pandemic. 

“The diminishing of these countermeasures creates concern that as the prevalence of technology required for these attacks increases, further countermeasures will prove insufficient,” the study said. 

“I can only see the accuracy of such models, and such attacks, increasing,” Dr. Ehsan Toreini, co-author of the study at the University of Surrey, told The Guardian on Tuesday, noting that — with smart devices bearing microphones becoming ever more common in households — such attacks highlight the need for public debates on governance of AI.
