A Brief History of Speech Recognition
From its earliest days in Bell Laboratories to the ubiquitous digital assistants of today, speech recognition has come a long way. Back in 1952, Bell Labs designed its “Audrey” system, which was capable of recognizing digits spoken by a single voice – a remarkable feat for the time. A decade later, IBM unveiled its “Shoebox” machine at the 1962 Seattle World’s Fair, capable of recognizing 16 spoken English words, including the digits 0 through 9.
Speech Recognition as a Part of Everyday Life
Fast forward a few decades and these early, somewhat primitive systems have given way to the highly advanced, AI-enabled technologies that are virtually everywhere today. Dial a customer service call center and the odds of immediately reaching a human representative are slim to none. The more likely scenario is that you will be asked a few automated questions before finally being routed to a representative. Speech recognition technology has also helped make driving safer for motorists and pedestrians alike by enabling hands-free dialing and voice-activated navigation systems.
One of the biggest impacts of speech recognition is in making technology accessible to individuals with vision, mobility or other impairments. Voice-activated functionality is the simplest and most intuitive way to interact with technology, as it represents a familiar interface that doesn’t place undue demand on the user.
Apple’s Siri is largely responsible for bringing AI-driven voice recognition to the masses. The culmination of decades of research, the world’s most recognized digital assistant has injected some humanity, and often a bit of wit and humor, into the otherwise dry world of speech recognition.
Expanding to New Sectors with AI
Although speech recognition found its footing and exploded in popularity through personal, consumer uses, years of training and mountains of data have pushed the technology to a point where accuracy levels are high enough to apply it in an enterprise setting.
Indeed, the market is experiencing a boom, propelled by the advancement of sophisticated AI technologies. The $55-billion speech recognition industry has been forecast to grow at 11% from 2016 to 2024, providing massive opportunities across a variety of industries for smaller startups and tech giants alike to grab market share.
Speech recognition technology works through two complementary kinds of algorithmic modeling: acoustic and linguistic. Acoustic modeling captures the connection between linguistic units of speech and audio signals, while language modeling matches those sounds with word sequences to differentiate between similar-sounding words.
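To make that division of labor concrete, here is a minimal, hypothetical sketch of how a decoder might combine the two models; the candidate phrases and all the scores are invented for illustration, standing in for the outputs of trained models:

```python
# Acoustic model: log P(audio | words) -- how well each candidate
# transcription explains the observed audio signal.
# (Hard-coded here purely for illustration.)
acoustic_log_prob = {
    "recognize speech": -12.1,
    "wreck a nice beach": -11.8,  # sounds almost identical
}

# Language model: log P(words) -- how plausible each word sequence is.
language_log_prob = {
    "recognize speech": -4.2,
    "wreck a nice beach": -9.7,  # grammatical, but far less common
}

def decode(candidates):
    """Pick the transcription maximizing P(audio|words) * P(words)."""
    return max(
        candidates,
        key=lambda w: acoustic_log_prob[w] + language_log_prob[w],
    )

print(decode(["recognize speech", "wreck a nice beach"]))
# -> "recognize speech": the language model breaks the acoustic near-tie
```

The acoustic scores alone barely separate the two phrases; it is the language model’s estimate of which word sequence is more plausible that settles the decision.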
Here’s a brief overview of some of the intricate AI technologies that are involved:
Machine learning (ML): A subset of AI, ML refers to systems that learn from experience and get “smarter” over time, without human intervention. ML is a method of training an algorithm so that it can learn how to perform a specific task. Training involves feeding the algorithm large amounts of data and allowing it to adjust and improve, as in the sketch after this list.
Deep learning (DL): Like ML, DL refers to systems that learn from experience. The difference is that DL uses many-layered neural networks and is typically applied to much larger data sets.
Natural language processing (NLP): The branch of AI concerned with systems that can process spoken or written language to understand its meaning.
Neural networks: An approach to ML built on biologically inspired networks of artificial neurons that loosely mimic the structure and capabilities of the brain. A neural network is a framework or model designed to help machines learn.
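To ground the ML entry above, here is a minimal sketch, on an invented toy task, of training a single adjustable parameter. The model sees examples, measures its error, and nudges its weight to improve; this adjust-and-improve loop is the “learning from experience” the list describes:

```python
import random

# Toy task (invented for illustration): learn y = 2x from examples.
examples = [(x, 2 * x) for x in range(-10, 11)]

weight = random.uniform(-1, 1)  # the model's single adjustable parameter
learning_rate = 0.01

for epoch in range(50):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        # Gradient descent step: nudge the weight to reduce the error.
        weight -= learning_rate * error * x

print(round(weight, 3))  # converges near 2.0
```

Real speech models train millions of such parameters over enormous audio corpora, but the underlying loop of predict, measure error, adjust is the same.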
Context is Key
A key innovation that has spurred the evolution of speech recognition technology is the introduction of context-focused algorithms. It can often be hard to differentiate between two similar-sounding phrases without any background information. However, if the speech-to-text engine is fed with data about the subject matter, it can accurately convert the spoken word to text with minimal errors. After all, a conversation about “euthanasia” is likely to be very different from one about “youth in Asia”.
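A toy illustration of that idea follows; the domain “corpora” are invented snippets, and the scorer simply counts overlap with the context vocabulary, standing in for a real context-aware language model:

```python
from collections import Counter

def score(phrase, domain_words):
    """Count how many of the candidate's words appear in the domain vocabulary."""
    return sum(domain_words[w] for w in phrase.split())

# Invented domain text standing in for real subject-matter data.
medical_context = Counter("patient ethics end of life care euthanasia debate".split())
travel_context = Counter("youth exchange programs in asia travel study".split())

candidates = ["euthanasia", "youth in asia"]

for context in (medical_context, travel_context):
    best = max(candidates, key=lambda c: score(c, context))
    print(best)
# -> "euthanasia" under the medical context, "youth in asia" under the travel one
```

The acoustics of the two candidates are essentially identical; only the background data tips the engine toward the right transcription.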
Future Improvements to Speech Recognition
Although speech recognition technology has come a long way since the days of Audrey and Shoebox, there is still a long way to go. One area that offers significant room for improvement is accent recognition. A recent study commissioned by the Washington Post found that Google and Amazon’s smart speakers were 30% less likely to understand non-American accents than those of native-born speakers. The issue persisted among certain American accents as well: rural and Southern accents were less likely to be understood than Northern, Western or Midwestern ones.
Speech analytics is another field that holds a lot of promise for future advancement. Turning unstructured data such as audio and video files into structured, text-based information enables pattern recognition and the extraction of practical insights. This capability has the potential to revolutionize call quality monitoring and sentiment analysis, and to improve the speed and accuracy of workflows in a variety of industries, including speech recognition software for the legal and academic sectors.
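As a sketch of that pipeline, here is a minimal example using the open-source SpeechRecognition Python package (a tool chosen for illustration; the article names no specific software), turning a hypothetical audio file into text and then into a simple structured record:

```python
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()

# Step 1: unstructured audio -> text.
with sr.AudioFile("support_call.wav") as source:  # hypothetical recording
    audio = recognizer.record(source)
transcript = recognizer.recognize_google(audio)  # Google Web Speech API

# Step 2: text -> structured, searchable information.
# A naive keyword scan standing in for real sentiment analysis.
negative_terms = {"cancel", "refund", "frustrated", "angry"}
words = transcript.lower().split()

record = {
    "transcript": transcript,
    "word_count": len(words),
    "negative_hits": sorted(set(words) & negative_terms),
}
print(record)
```

Production speech analytics would replace the keyword scan with trained sentiment and topic models, but the shape of the pipeline, audio in and structured records out, is the same.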
From AI-powered digital assistants to personalized product recommendations when shopping online, artificial intelligence is all around us, with no signs of slowing down. And while it’s clear that significant progress has been made, it’s also evident that we’ve only begun to scratch the surface of what AI can do when applied to speech recognition and speech-to-text technologies.