Transcription firm Verbit eyes new languages with $31M funding round

AI-powered transcription company Verbit on Wednesday announced a $31 million Series B funding round that will allow it to expand its offerings.

The company’s technology, which uses machine learning to recognize human speech and transcribe it in real time or from a media source, is used by more than 150 organizations, including legal groups, media companies and higher education institutions such as Harvard and Stanford, according to the company. The new funding, the company’s executives told EdScoop, will allow them to offer their products to other industries, add new languages and improve the technology.

Verbit’s Scott Ready said that while speech-recognition and transcription technology is traditionally seen within education as primarily serving students who are deaf or hard of hearing, transcriptions of live presentations and of audio and video content can benefit all students.

“Research has proven that captioning and transcription benefits all students,” Ready said. “So when you are looking at how technology can come in to reduce costs and increase accuracy, then you’re looking at changing captioning and transcription from an accommodation feature to a learning feature.”

Transcription is a tedious and resource-intensive process, which has made it a prime target for software developers in recent years. Verbit, which is based in Tel Aviv, Israel, is joined in the market by a host of other companies using speech recognition technology, such as Amira Learning, Soapbox Labs and Bamboo Learning, all of which raised venture funding last year.

“The technology allows us to provide a service that people are in need [of], and there are not enough people that are available to provide that service,” said Verbit executive Jacques Botbol. “And the technology is an enabler of that.”

Verbit’s Series B funding round was led by the New York-based growth equity firm Stripes, adding to the company’s existing investors: Viola Ventures, Vertex Ventures, HV Ventures, Oryzn Capital and ClalTech. The company’s total funding is now $65 million after a $23 million Series A round in January 2019.

Webinar Recap: How AI Empowers Personalized Learning

Providing students with more personalized learning has been a goal of university leaders for years. Experienced higher education professionals understand that each learner brings a unique blend of experiences and expectations to the desk and can therefore benefit from multiple methods of consuming course content.

In the past, educators have tried to enlist learning management systems to develop various learning paths. The process often proved cumbersome and discouraging for instructors. Yet the growth of artificial intelligence has changed things, said Scott Ready, an expert in higher education and accessibility with more than 30 years of experience.

A Live Discussion on Video

Ready, Verbit’s Accessibility Evangelist, explored this idea during Verbit’s live webinar with Rob Lipps, EVP of Mediasite. The discussion focused on how artificial intelligence is being implemented to both solve accessibility challenges and account for key changes in media consumption, which have influenced students’ classroom expectations.

Lipps spoke about how today’s students expect almost everything to be video-focused.

“60 percent of all network traffic on the Internet is video, and in just two years, it’s going to be 80 percent. It’s phenomenal,” he said.

Universities are implementing more digital and video components in the classroom as a result.

“The creation of content in higher education is exploding,” Lipps said. “We see universities creating [up to] 70,000 hours of video a year from classrooms, not just from desktops and phones, but actually recording of lectures and publishing. We’re talking about learners that have never known a world where they didn’t have video for everything, and it can be a bit disruptive to them to show up in an educational setting and not have access to the video that they’re accustomed to through the rest of their lives.”

This video-first mentality can present an interesting challenge for universities trying to create strong video content and meet the expectations of students.

The Netflix Mentality

Today’s students want video resources, but they also don’t want to spend time searching for them. They want systems to be advanced enough to suggest items for them, Lipps said.

“When we’re talking about student experiences and taking preferences into account, one thing that we notice is that students are used to companies knowing an awful lot about them,” Lipps said. “They’re used to Netflix knowing a lot about them… Netflix tends to know what they’re looking for before they look for it. I think they want that experience also in education.”

Keeping up with that expectation can be difficult. AI tools and their accompanying data present huge opportunities to customize learning experiences, making search easier and suggestions possible.

Universities are spending significant amounts of time and money to create video content, but they are likely missing out if they don’t review data to ensure that learners are engaging with, comprehending and retaining it.

Using Data to Inform the Learning Experience

Data can help to inform the quality of the learning experience, especially when it comes to video, Lipps said.

University leaders can look at analytics like time spent with a video or simply student performance improvement metrics. They can also look at how the relationships between professors and students or students and their peers are impacted to see if video improves communication and engagement.

When considering data, it’s also notable that regardless of disability, most video consumers read captions rather than listen to the audio aloud. Eighty-five percent of video content on Facebook alone was consumed silently with captions, Ready noted.

Lipps referred to this as data enrichment.

“I don’t always turn captions on in a video that I’m watching, but if they’re on, I almost never turn them off because I actually read faster than I listen… I actually find myself wishing that the video moved a little faster when the captions are on because I’m comprehending quicker what’s actually being said,” Lipps added.

Using these enriching technologies, including AI, to offer all students live captions during lectures can have a huge impact on the quality of the video experience institutions deliver.

“We tend to talk a lot about accessibility [regarding captions and AI uses]…and that certainly is a driving factor, but the needs of the disabled community can have a tremendous impact, positive impact on all learners of all abilities. I think these are convergent initiatives,” Lipps said.

AI’s Impact on Academic Video

Lipps also provided viewers with insights from a recent survey Mediasite conducted with University Business where university leaders were polled on their expectations of AI. Their top two reasons for considering AI technologies were to aid in their accessibility compliance initiatives and create more personalized content delivery.

“The more video you create, the more that accessibility need blooms. It becomes an interesting opportunity to help these solutions come together and not just help students with disabilities, but all student learners,” Lipps said.

The partnership between Verbit and Mediasite is one example of driving this dual use case forward.

“When we look at these research studies, they all point to [the fact that] student performance improves, content material is reinforced, focus is maintained, and comprehension is enhanced by having captions added to video. It’s not just for the deaf and hard of hearing any longer. It really is something that we are all consuming,” Ready said.

Yet while captions are helpful for all, it’s crucial that they effectively serve the needs of the students who rely on them. It’s therefore best practice to select an AI-based solution that ensures accurate captions are being placed on your videos to guarantee student success and compliance.

“Our automatic speech recognition engine was developed within Verbit, because we couldn’t find one commercially provided that was accurate enough,” Ready said.

Verbit’s automatic speech recognition engine is fueled by AI, becoming smarter with each use, but also by human intelligence: professional transcribers and editors check the technology’s output to ensure video viewers receive fully accurate captions and transcriptions.

Additional Practices for Crafting Engaging Video

When considering other best video practices, Lipps said it’s simple: just start recording.

“Just hit record. You can wait for a lot of things to be perfect – perfect lighting, perfect automation, lots of things, but I think at the end of the day, if you, again, go back to personas and the expectation of the viewer, they’re used to watching pretty bad videos on YouTube every day. So the expectation that the lighting is going to be perfect in a classroom is pretty low. I think they would rather have the content,” he said.

Lipps and Ready agreed, though, that individuals creating video content should be more cautious when it comes to the quality of the audio. Poor audio quality can undermine the ability to produce a searchable, organizable, compliant video at the end of the day, they said.

As a final takeaway, the webinar dialogue turned to the importance of collaboration in fueling video initiatives that are effective, engaging and compliant.

Approach Video Collaboratively with Peers

Video is being used in departments throughout the institution. Formerly, video technology and university video initiatives were run by academic technologists, with an accessibility team at the table to ensure compliance. Ready and Lipps encouraged viewers to actively partner across all departments when considering the effective implementation of video.

“Partnering early [and looking] at it from not just the vantage point of the disabled community, but what the disabled community actually has in common with the broader community will create a much more cohesive and well-delivered strategy,” Ready said.

Lipps added that he has found members of the higher education community to be some of the greatest sources of knowledge and collaborators for their willingness to openly share their findings with peers.

“Higher education shares knowledge across the board with their peers better than any space I’ve ever worked in,” he said. “There is so much expertise out there in these communities of people that have gone before you. If you’re not creating a lot of video and you want to, talk to your peers. Odds are you know somebody that is.”

You can watch the full webinar on-demand now.
