- Microsoft announced it is working on a text-to-speech artificial intelligence tool.
- VALL-E can clone someone’s voice from a 3-second audio clip and use it to synthesize other words.
- The news came as the tech giant plans to invest $10 billion in OpenAI, the maker of ChatGPT.
Microsoft, which plans to invest $10 billion in OpenAI, the maker of ChatGPT, is working on an artificial intelligence tool called VALL-E that can clone someone’s voice from a three-second audio clip.
VALL-E, trained on 60,000 hours of English speech, is capable of mimicking a voice in “zero-shot scenarios,” meaning the AI tool can make a voice say words it has never heard that voice say before, according to a paper published on arXiv in which the developers introduced the tool.
VALL-E uses text-to-speech technology to convert written words into “high-quality personalized” spoken speech, according to the 16-page paper.
Its training drew on recordings of more than 7,000 real speakers from LibriLight – an audiobook dataset made up of public-domain texts read by volunteers. The tech giant released audio samples showcasing how VALL-E clones a speaker’s voice.
The AI tool is not currently available for public use, and Microsoft hasn’t made clear what its intended purpose is.
The researchers said the results so far showed that VALL-E “significantly outperforms” the most advanced systems of its kind, “in terms of speech naturalness and speaker similarity.”
But they noted a lack of accent diversity among the speakers, and that some words in the synthesized speech were “unclear, missed, or duplicated.”
They also included an ethical warning about VALL-E and its risks, saying the tool could be misused, for example in “spoofing voice identification or impersonating a specific speaker”.
“To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E,” the developers wrote in the paper. They didn’t give details of how this could be done.
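The paper doesn’t describe how such a detector would work, but the general idea – a binary classifier trained to tell real recordings apart from synthesized ones – can be sketched as follows. Everything here is hypothetical: the features are random stand-ins for real acoustic features, and this is in no way Microsoft’s method.

```python
# Minimal sketch of a synthesized-speech detector: a binary classifier
# that labels audio clips as real (0) or synthesized (1).
# The "features" below are toy numbers standing in for real acoustic
# features (e.g. spectral statistics); in practice these would be
# extracted from actual audio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def toy_features(n_clips: int, synthesized: bool) -> np.ndarray:
    # Hypothetical assumption: synthesized clips have slightly shifted
    # feature statistics compared with real recordings.
    mean = 1.0 if synthesized else 0.0
    return rng.normal(loc=mean, scale=0.5, size=(n_clips, 8))

# Build a labeled toy dataset: 200 "real" clips and 200 "synthesized" clips.
X = np.vstack([toy_features(200, False), toy_features(200, True)])
y = np.array([0] * 200 + [1] * 200)

# Fit the detector and check how well it separates the two classes.
detector = LogisticRegression().fit(X, y)
accuracy = detector.score(X, y)
```

The hard part in reality is not the classifier but the features: a detector is only as good as the acoustic cues that distinguish synthetic audio, and those cues shift as synthesis models improve.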
They added that “if the model is generalized to unseen speakers in the real world, it should include a protocol to ensure that the speaker approves the use of their voice.”
Microsoft didn’t immediately respond to Insider’s request for comment.