At Amazon’s re:MARS conference, Alexa senior vice president Rohit Prasad demonstrated a startling new voice assistant capability: the supposed ability to mimic voices. So far, there is no timeline as to when or if this feature might be released to the public.
Stranger still, Amazon framed this mimicry capability as a way to commemorate lost loved ones. It played a demonstration video in which Alexa read to a child in the voice of his recently deceased grandmother. Prasad stressed that the company was seeking ways to make AI as personal as possible. “While AI can’t eliminate that pain of loss,” he said, “it can definitely make the memories last.” An Amazon spokesperson told Engadget that the new skill can create a synthetic voiceprint after being trained on as little as a minute of audio of the person it is supposed to replicate.
Security experts have long held concerns that deepfake audio tools, which use text-to-speech technology to create synthetic voices, would pave the way for a flood of new scams. Voice cloning software has already enabled a number of crimes, such as a 2020 incident in the United Arab Emirates in which fraudsters fooled a bank manager into transferring $35 million by impersonating a company director. But deepfake audio crimes are still relatively rare, and the tools available to scammers are, for now, relatively primitive.