Tuesday, May 2, 2017
Alexa learns to whisper and speak like humans do
Voice synthesis has come a long way since the days when computers sounded like robots, but there's still a long way to go. Amazon takes another step forward by teaching Alexa to speak with added emotion and human-like characteristics.
Current digital assistants have very high-quality voices, but it doesn't take long to notice how artificial they sound once they speak more than a few words. People don't talk like that; they vary their speed, pitch and volume, and that's what Alexa is about to learn thanks to Amazon's support for the Speech Synthesis Markup Language (SSML).
SSML gives developers the tools to tune Alexa's speech to their needs: expressing "feelings" by speaking words more loudly, whispering something, or even bleeping out certain words that are better left unsaid. There are also "speechcons", the audio equivalent of emoticons, which let a phrase be delivered with a special expression such as "Eureka!", "ahem", "yay", etc.
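To make this concrete, here is a short sketch of what such markup can look like, based on tags Amazon documents for Alexa skills (`amazon:effect` for whispering, `prosody` for volume and rate, and `say-as` for bleeping and speechcons); the exact wording inside the tags is just an illustration:

```xml
<speak>
    Here is my normal voice.
    <amazon:effect name="whispered">But I can whisper this part.</amazon:effect>
    <prosody volume="loud" rate="slow">Or say this loudly and slowly.</prosody>
    That word was <say-as interpret-as="expletive">bleeped</say-as> out.
    <say-as interpret-as="interjection">eureka!</say-as>
</speak>
```

A skill returns this markup in its response instead of plain text, and Alexa renders the effects when speaking it aloud.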
Now it's up to developers to put this SSML to good use and bring us a more human-sounding Alexa in the coming months.