Hackers might use artificial intelligence to acquire user passwords with near-perfect accuracy by "listening" to an unsuspecting person's keystrokes, according to a worrying study published earlier this month.
A group of computer scientists from the United Kingdom created an artificial intelligence model to recognise keyboard sounds on the 2021 version of a MacBook Pro, dubbed a "popular off-the-shelf laptop."
According to the study's results, published on Cornell University's arXiv preprint server, the AI programme running on a nearby smartphone could reproduce the typed password with a stunning 95% accuracy.
The AI tool was also highly accurate when "listening" to typing over the laptop's microphone during a Zoom video call: according to the researchers, it reproduced the keystrokes with 93% accuracy, a record for that medium.
The researchers warned that many users are unaware that malicious actors could monitor their typing to breach their accounts, a technique known as an "acoustic side-channel attack."
"The ubiquity of keyboard acoustic emanations not only makes them a readily available attack vector but also prompts victims to underestimate (and thus not try to hide) their output," according to the report.?
"For example, when typing a password, people will frequently hide their screen but do little to mask the sound of their keyboard."?To test accuracy, the researchers pushed 36 keys on the laptop 25 times apiece, with each press "varying in pressure and finger."
The programme could then "listen" for distinguishing features of each key press, such as its sound wavelengths.
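In broad terms, this kind of attack works by isolating the sound of each keystroke, turning it into a spectrogram-style "fingerprint", and training a classifier on labelled presses. The following is a minimal sketch of that general pipeline, not the authors' actual code: it assumes the librosa audio library for feature extraction and substitutes a simple nearest-neighbour classifier for the deep learning model used in the study, with all file names and data purely illustrative.

```python
# Illustrative sketch of an acoustic keystroke classifier.
# Assumptions: librosa and scikit-learn are installed; `recorded_clips` is a
# list of short audio arrays, one per isolated keystroke, and `recorded_labels`
# holds the corresponding key names (hypothetical data, as in the study's
# 36 keys x 25 presses setup).
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier  # stand-in for the paper's deep model

def keystroke_features(samples, sr=44_100, n_mels=64, frames=32):
    """Convert one isolated keystroke clip into a fixed-size mel-spectrogram vector."""
    mel = librosa.feature.melspectrogram(y=samples, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Pad or trim to a fixed width so every keystroke yields the same feature size.
    fixed = librosa.util.fix_length(log_mel, size=frames, axis=1)
    return fixed.ravel()

def train_classifier(clips, labels):
    """Fit a simple classifier on labelled keystroke recordings."""
    features = np.stack([keystroke_features(c) for c in clips])
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(features, labels)
    return model

# Hypothetical usage:
# model = train_classifier(recorded_clips, recorded_labels)
# guessed_key = model.predict([keystroke_features(new_clip)])
```

The real attack described in the study reportedly achieves its high accuracy with a far more capable deep learning model; the sketch above only illustrates the feature-then-classify structure of an acoustic side-channel attack.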
The recording smartphone, an iPhone 13 mini, was positioned 17 centimetres away from the keyboard. The study was conducted by Joshua Harrison of Durham University, Ehsan Toreini of the University of Surrey, and Maryam Mehrnezhad of Royal Holloway, University of London.
The prospect of AI tools assisting hackers adds another risk to the growing list of concerns around the emerging technology.
Several prominent figures in the field, including OpenAI chief executive Sam Altman and entrepreneur Elon Musk, have cautioned that AI could pose a substantial risk to humanity if sufficient safeguards are not put in place.
According to the authors, these types of attacks are understudied but have a lengthy history: "acoustic emanations" were already cited as a vulnerability in a partially declassified NSA document from 1982.
The study adds to recent concerns about how artificial intelligence capabilities could be used to threaten security and privacy.
According to Insider, AI can also make online scams harder to identify, because it makes it simpler to personalize a scam for each target.
In a statement, Zoom emphasized its commitment to its users' privacy and security and pointed to specific steps customers can take to protect their information during meetings:
"Zoom takes the privacy and security of our users seriously. In addition to the mitigation techniques suggested by the researchers, Zoom users can also configure our background noise suppression feature to a higher setting, mute their microphone by default when joining a meeting, and mute their microphone when typing during a meeting to help keep their information more secure."