AI and Sound Combine to Create Terrifying Mac Attack, Say Researchers

A UK research team based at Durham University has identified an exploit that could allow attackers to figure out what you type on your MacBook Pro, based on the sound each keyboard tap makes.

These kinds of attacks aren’t particularly new. The researchers found research into using acoustics to identify what people write dating back to the 1950s. They also note that the first paper detailing use of such an attack surface was written for the US National Security Agency (NSA) in 1972, prompting speculation that such attacks may already be in place.

“(The) governmental origin of ASCAs creates speculation that such an attack may already be possible on modern devices, but remains classified,” the researchers wrote.

There’s little doubt that if the US and UK governments have been exploring such exploits, others will be, too.

How the attack works

As reported by Bleeping Computer, the UK security researchers figured out how to identify what you type with an accuracy of up to 95%. The attack, which uses a combination of audio and AI, is not confined to Macs.

The exploit is explained in greater detail here, but it isn’t completely straightforward. The attacker first needs to calibrate the sound of your keys against the characters they produce in order to train the AI. That means identifying the specific sound of each key press, though this could be achieved during a Zoom conversation if you happen to be typing in chat while your Mac keyboard is audible to others in the meeting.

Once the attack algorithm matches each sound to each key, the research claims it will capture what you type. “The researchers gathered training data by pressing 36 keys on a modern MacBook Pro 25 times each and recording the sound produced by each press,” the white paper explains.
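To make the calibration step concrete, below is a minimal sketch of how an attacker might turn a folder of isolated keystroke recordings into a per-key classifier. The directory layout, sample rate, mel-spectrogram features, and support-vector classifier are all illustrative assumptions; the researchers’ actual attack trains a deep-learning model on spectrograms of each press rather than the simple pipeline shown here.

```python
# Illustrative sketch only. Assumes each keystroke has already been isolated
# into its own short .wav clip, stored one folder per key (clips/a/*.wav, ...).
# The feature choice (time-averaged log-mel spectrogram) and the SVM are
# stand-ins for demonstration; the actual research used a deep-learning model.
import glob
import os

import librosa
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def keystroke_features(path, sr=44100):
    """Summarise one keystroke clip as a fixed-length log-mel feature vector."""
    audio, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return log_mel.mean(axis=1)  # average over time frames


# Build the labelled training set: the label is the key that was pressed.
features, labels = [], []
for path in glob.glob("clips/*/*.wav"):
    features.append(keystroke_features(path))
    labels.append(os.path.basename(os.path.dirname(path)))

X = np.array(features)
y = np.array(labels)

# Hold out 20% of the presses to estimate how often the classifier
# guesses the right key from sound alone.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out keystroke accuracy:", model.score(X_test, y_test))
```

With 25 labelled presses per key, as in the study, even a classifier this simple shows why a clear recording of your typing, over Zoom or otherwise, can serve as a workable training set.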

What this means

At its simplest, the nature of such attacks means that if someone can access your computer and record that training data, or can find some other way to listen to and identify the sound your keyboard makes when you type, they can use AI to monitor your work quite accurately. All they need is to be able to listen.

The microphone used to listen could be the one you leave on in Zoom, the one inside a hacked smartphone, or one reached by an app that abuses the privacy permissions you granted it. The mic could even be a conventional snooping device, and once it’s in place, the deep learning algorithm could allow attackers to gain access to sensitive data, passwords, and more.

What next?

Concerning as the exploit might seem, it is also a good illustration of how AI can be used in novel ways to undermine security perimeters. This will become even more problematic as the cost of quantum computing declines, because those machines can churn through data so much faster than the computers we use today.

In theory, these quantum computers could break the cryptographic keys upon which the internet depends in a few hours, meaning…
