At Amazon’s re:MARS conference, Alexa senior vice president Rohit Prasad showed off a surprising new voice assistant capability: the purported ability to mimic voices. So far, there is no timeline as to when or if this feature will be released to the public.
Even weirder, Amazon framed this imitation ability as a way to commemorate lost loved ones. Prasad played a demo video of Alexa reading to a child in the voice of her recently deceased grandmother, and emphasized that the company was looking for ways to make AI as personal as possible. “While AI can’t take away the pain of loss,” he said, “it can definitely make memories last.” An Amazon spokesperson told Engadget that the new skill can create a synthetic voiceprint after being trained on as little as one minute of audio from the person it’s supposed to replicate.
Security experts have long worried that deepfake audio tools, which use text-to-speech technology to create synthetic voices, would pave the way for a flood of new scams. Voice cloning software has already enabled a number of crimes, such as a 2020 incident in the United Arab Emirates in which fraudsters tricked a bank manager into transferring $35 million after impersonating a company director. But deepfake audio crimes are still relatively rare, and the tools available to fraudsters are, for now, relatively primitive.