This has long been the goal – a computer that can recognise conversational speech with human accuracy. Accuracy is measured as Word Error Rate (WER), the proportion of words a system transcribes incorrectly, and professional human transcribers get about 5.9% wrong. Microsoft says its system has matched that figure – the lowest ever recorded on the industry-standard Switchboard speech recognition task.
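Word Error Rate is conventionally computed as the word-level edit distance (substitutions, insertions and deletions) between a reference transcript and the system's output, divided by the number of reference words. A minimal sketch (illustrative only; the function name and scoring details are this article's own, not Microsoft's evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

On this measure, dropping one word from a six-word reference scores 1/6 ≈ 16.7%; a 5.9% WER means roughly one word in seventeen is wrong.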
Suffice it to say the researchers at Microsoft AI are chuffed, and have published their paper in the Cornell University Library's arXiv repository.
The abstract says, “The key to our system's performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training.”
The milestone means that, for the first time, a computer can recognise the words in a conversation as well as a person can. In doing so, the team has beaten a goal it set less than a year ago — and greatly exceeded everyone else's expectations as well.
“Even five years ago, I wouldn’t have thought we could have achieved this. I just wouldn’t have thought it would be possible,” said Harry Shum, the executive vice-president who heads the Microsoft Artificial Intelligence and Research group.
An important point to note is that the research team has reached parity with humans – the system is not yet better. Like a human listener, though, it can guess what a mispronounced or misheard word was from context, and fill in any gaps.
How did they do it?
To reach the human parity milestone, the team used Microsoft’s Computational Network Toolkit (CNTK), a homegrown system for deep learning that the research team has made available on GitHub via an open source licence.
Xuedong Huang, Microsoft's chief speech scientist, said CNTK's ability to quickly process deep learning algorithms across multiple computers running specialised chips called graphics processing units (GPUs) vastly improved the speed at which the team was able to do its research and, ultimately, reach human parity.
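The general idea behind spreading training across many GPUs or machines is data parallelism: each worker computes a gradient on its own shard of the data, the gradients are averaged, and a shared model is updated. A toy sketch of that pattern (illustrative pure Python, not CNTK code; the function names, the least-squares example and the two-worker split are all this article's own):

```python
def local_gradient(w, shard):
    # Gradient of the mean squared error (w*x - y)^2 on one worker's data shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def parallel_step(w, shards, lr=0.02):
    # In a real system each worker runs on its own GPU/machine;
    # the average plays the role of the all-reduce step.
    grads = [local_gradient(w, shard) for shard in shards]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Fit y = 3x from data split across two simulated "workers"
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(200):
    w = parallel_step(w, shards)
```

After training, `w` converges to 3.0. Real systems add tricks such as gradient compression to cut the communication cost of that averaging step, which is where much of the speed-up comes from.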
Microsoft has been on a roll lately. Last week another group of Microsoft researchers, who are focused on computer vision, reached a milestone of their own. The team won first place in the COCO image segmentation challenge, which judges how well a technology can determine where certain objects are in an image.
So what does this mean?
It has broad implications for consumer and business products that can be significantly augmented by speech recognition. That includes consumer entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana.
“This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.
Geoffrey Zweig, who manages the Speech & Dialog research group, said the researchers are now working on ways to make sure that speech recognition works well in more real-life settings. That includes places where there is a lot of background noise, such as at a party. They will also focus on better ways to help the technology assign names to individual speakers when multiple people are talking, and on making sure that it works well with a wide variety of voices, regardless of age, accent or ability.
In the longer term, researchers will focus on ways to teach computers not just to transcribe the acoustic signals that come out of people's mouths, but to understand the words they are saying. That would give the technology the ability to answer questions or take action based on what it is told.
“The next frontier is to move from recognition to understanding,” Zweig said.