Researchers at the University of East Anglia (UEA) in the UK have developed an algorithm that interprets mouthed words more accurately than human lip readers.
“We’re looking at visual cues and saying how do they vary? We know they vary for different people. How are they using them? What’s the differences? And can we actually use that knowledge in this particular training method for our model? And we can,” says Dr. Helen Bear, who created the visual speech recognition system as part of her PhD alongside Prof. Richard Harvey of UEA’s School of Computing Sciences.
According to Dr. Bear, the core challenge is that humans produce more distinct sounds than distinct visual cues. For example, several sounds have confusingly similar lip shapes, such as /p/, /b/, and /m/, all of which typically cause difficulties for human lip readers. UEA’s visual speech model is able to distinguish between these visually similar lip shapes more accurately.
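To see why this many-to-few mapping makes lip reading hard, consider a simplified sketch of how phonemes (distinct sounds) collapse into visemes (distinct lip shapes). The grouping below is an illustrative assumption for this example, not the actual mapping used by the UEA system:

```python
# Illustrative sketch: many phonemes share one lip shape (viseme),
# so a visual-only recognizer receives ambiguous input.
# This phoneme-to-viseme grouping is a simplified assumption,
# not the mapping used by the UEA model.

PHONEME_TO_VISEME = {
    # Bilabials: lips pressed together -- visually near-identical
    "/p/": "lips_together",
    "/b/": "lips_together",
    "/m/": "lips_together",
    # Labiodentals: lower lip touches upper teeth
    "/f/": "lip_teeth",
    "/v/": "lip_teeth",
    # Rounded vowels
    "/u/": "rounded",
    "/o/": "rounded",
}

def viseme_ambiguity(viseme):
    """Return all phonemes that map to the same lip shape."""
    return sorted(p for p, v in PHONEME_TO_VISEME.items() if v == viseme)

# "pat", "bat", and "mat" all begin with the same viseme, so a
# purely visual system must use context to tell them apart.
print(viseme_ambiguity("lips_together"))  # → ['/b/', '/m/', '/p/']
```

Because three phonemes collapse into a single visual class here, the model cannot decide between them from one frame alone; resolving the ambiguity requires context from surrounding lip shapes, which is the kind of speaker-specific variation Dr. Bear describes exploiting in training.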
This technology may one day help people with hearing or speech impairments, generate audio for silent security-camera footage, or enhance poor audio quality in mobile video calls.