Hi, everyone. Welcome to the webinar.

We're going to get started in just a minute.

Just going to wait for the room to fill up a little more.

Okay, I think we're about ready to start.

All right. So hello, everyone.

Thank you very much for taking time

out of your busy schedule to be with us today.

My name is Ezra, and I'm

honored to be hosting this webinar for you.

We're very excited to get started, and

hopefully, we can shed some light on making

courses more accessible in

today's online-heavy environment.

In addition to that, we'll be covering various topics,

including Universal Design for Learning,

how to keep students engaged while learning remotely,

and making sure your school is compliant with

the upcoming UK Disability Guidelines.

Today, we have two very special guests with us.

Nasser Siabi, CEO of Microlink,

and Scott Ready, who is

Verbit's Accessibility Evangelist and

Senior Customer Success Manager.

We've also set aside time

at the end for questions and answers.

But at any time, feel free to write in

your questions, and we'll do our best to get to them all.

The session today is also being captioned live by Verbit.

So please feel free to activate the captions

at the bottom of your Zoom toolbar.

In addition, if you click on the little arrow,

you should be able to open the full transcript as well.

So without further ado,

I'm going to introduce Dr. Nasser,

who is the CEO of Microlink.

Dr. Nasser, take it away.

Well, thank you very much, Ezra.

It was very kind of you to

accept my invitation to do this presentation today.

For the benefit of everyone, here is how this came about:

I am on the Business Disability

Forum's Technology Taskforce,

and I was asked to look into captioning.

With everyone working from home,

it has essentially become a vital tool for everyone.

There were lots of options, and

naturally people weren't sure which one

would actually do what

they wanted to achieve, and there are

lots of differences between

the methods used by various providers.

Google has one, Microsoft Teams has one, and there are others.

So I went around looking at

the market to see who's offering what service,

and I didn't want to just

bring the same model to everyone.

I wanted to find differentiators.

So when I was looking around,

I came across Verbit at the Zero Project in Vienna.

I found that they're doing something very unique,

and that is accurate captioning.

It's a hybrid between ASR,

which is automatic speech recognition,

and human captionists,

like the ones we do have

and use all the time,

MyClearText as well as Ai-Media.

I've always said there is a place

for both rather than just one or the other

because we value human captionists,

but it's not always practical to have them.

Also, the cost is a major factor,

especially in a higher education environment

where budgets have been quite challenging.

So I looked around,

and I'd been waiting for a while

until this integration with Zoom came about,

and lo and behold, I had an email

from Ezra's boss saying that it was working

and asking, "Do you want to see it?" I saw it, and I was very impressed.

Of course, then I said,

"You must show this to the people."

I've already done this with

the Business Disability Forum and some of

the multinationals, and they like what they see.

Hopefully, I can do

the same for the HE sector

specifically, and I want to

make sure they're focused on education

because I know they do a lot more.

Verbit has been working

with the corporate sector for a long time.

But this is specific to education,

and I hope you can get a lot out of this.

Thank you very much, and I'm going

to pass it to Scott, who is

the subject matter expert

within Verbit. Take it away, Scott.

Fantastic. Well, thank you so much, Nasser.

Thank you for that introduction.

Just to give you a little bit of my background.

I've been in the field of

education and accessibility for over 30 years now.

Prior to working at Verbit,

I was at Blackboard and oversaw

accessibility there for five years.

Then prior to that,

I was in education.

I was the director of online education and also

the chair of an interpreter training program

where students were learning to become

sign language interpreters.

Again, prior to that,

my career has always centered around access:

how can I create a more inclusive environment,

be it at a state agency,

on federal projects, or at other corporations?

My parents were both deaf.

They were teachers at a school for the deaf in Missouri.

So I had the awesome privilege of

growing up on campus at the school for the deaf.

They had housing for faculty.

So I bring all of that to say,

I've been in this field for a long time,

and I really appreciate the opportunity to get

together for just a few minutes and

share a little bit more with you.

This time has been interesting.

We have seen some major changes

that have happened very quickly.

There have been some great responses to

the changes and the evolving needs of students.

You've heard the quote,

"Necessity is the mother of invention."

In this case, it's maybe not so much

invention as adoption:

the adoption of technology, and making sure that

the needs of students are really being met.

It didn't take long for everyone to respond with

a viable solution utilizing tools

that were already at our fingertips.

But the tools and technology alone can oftentimes

create access barriers for many of those who use them.

So when we take a look at that,

there are some key items to consider.

Take a look and identify:

are there barriers that have been

created with this technology that need to be removed?

If so, how can the barrier

be removed to provide greater access?

That's with the content,

that's with engagement and

collaboration, that's in your assessments.

So when we start looking and

identifying those barriers that have been created,

oftentimes, the solution is really very simple.

We have resources that we can draw from.

Much like what Nasser was referring to in

exploring and identifying what those resources are,

oftentimes, the challenge is identifying

the best resource we have available to us,

and that's why we're here to share with you.

When we look at what COVID-19 has done,

the next question that I hear from institutions is,

"What's the long-term impact

that's going to be taking place?"

I love to go back and look at history,

at inventions and adoptions.

Henry Ford once said,

"If I had asked people what they had wanted,

they would have said faster horses instead of a car."

So if left to our preference about any kind of change,

we probably wouldn't change.

But this situation has forced us all to change.

We're all in this together.

Every one of us has changed in ways

that we've never experienced in the past.

So now that we have this new experience,

what are the realized benefits of

this experience and how can we carry this forward?

When we take a look at the long-term impact,

we see a greater adoption of technology

and, like Nasser said, a hybrid methodology.

A methodology that incorporates

not just people and not just technology,

but the blending of the two,

so that we can really benefit from both

of them like we do with our captioning.

Increased collaborative learning

and ownership of learning.

So like I said,

I've worked in educational technology

now for over 20 years,

and have encountered a lot of resisters.

I too have found myself resisting

adopting technology until forced.

For most everyone I have ever

worked with regarding educational technology,

the basic questions needing to be answered are:

why is this technology better than what I've been doing,

and will this technology enable me to achieve

more or better results?

If we can answer those two questions, then yes,

education will be impacted by

the experience we've all endured with this pandemic.

So having set that groundwork,

how is captioning empowering all learners?

How is it creating a more inclusive environment?

One significant realization that

institutions are having right now is how captioning

is creating that more inclusive environment

for more than just the

students who are deaf or hard of hearing.

It provides access for

the single mother studying while the baby is asleep.

It provides access to the student who is

commuting and is in a noisy public space.

It provides access for individuals

whose native language is not English.

Let me share with you just a little bit of what Nasser

referred to when he saw Verbit for the first time.

I'm going to take you through a high-level explanation

of our hybrid process and

how the captioning is able to truly save

money and provide that level of accuracy.

As a file is being introduced

into the Verbit workflow,

the first step is

our automated speech recognition engine.

I want you to understand that

this isn't the same speech recognition engine

that is out in the commercial market.

This is one that has been designed for education.

By that, I mean that there are specific models that are

designed to address

the disciplines within your institutions,

so that it can more accurately

reflect and caption your content,

both live and post-production.

After it goes through that technology,

then we have two human editors who

edit the document that

the technology has already created.

So you can see right there already,

there's a huge cost savings.

There's a cost savings because

the technology is doing the heavy lifting.

But we want to make sure that the accuracy is there,

and we all know that technology

hasn't achieved that level of accuracy yet.

So that's why we've added

the two human editors to that level.

From there, those corrections are being

fed back into our artificial intelligence,

so that our technology is

continuing to get smarter and smarter,

and as we work with your institutions,

we get to know the nuances,

the specifics, the terminology,

the way that things are communicated at your institution,

so that our technology can continue

to get more and more accurate

and require less and less human editing.

From that process, the file is then delivered back to

you as a captioned file,

be it live captioning with Zoom,

like you're seeing here today,

or a post-production file for

a video that you might have created for class content.

That's the high level.
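To make that high level concrete, here is a minimal sketch in Python of the four steps just described: a domain-tuned ASR pass, editing by two humans, corrections fed back to the models, and delivery. Every name in it (CaptionJob, run_asr, and so on) is a hypothetical placeholder, not Verbit's actual API.

```python
# Minimal sketch of the hybrid flow described above. All names are
# hypothetical placeholders, not Verbit's actual API.
from dataclasses import dataclass, field

@dataclass
class CaptionJob:
    audio_path: str
    discipline: str                  # e.g. "biology": selects a domain model
    transcript: str = ""
    corrections: list = field(default_factory=list)

def run_asr(job: CaptionJob) -> CaptionJob:
    """Step 1: the education-tuned ASR engine does the heavy lifting."""
    job.transcript = f"<machine draft of {job.audio_path} ({job.discipline} model)>"
    return job

def human_edit(job: CaptionJob, editors: int = 2) -> CaptionJob:
    """Step 2: two human editors correct the machine draft."""
    for n in range(editors):
        job.corrections.append(f"fixes from editor {n + 1}")
    return job

def feed_back(job: CaptionJob) -> None:
    """Step 3: corrections go back into the AI so accuracy keeps improving."""
    # A real system would use these edits to adapt the domain model.
    _ = job.corrections

job = human_edit(run_asr(CaptionJob("lecture01.mp4", "biology")))
feed_back(job)  # step 4: the captioned file is delivered back to the customer
```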

Right now, here today,

since we have a lot of individuals joining us,

we're going to provide you

an opportunity to see the platform

individually, and to address

how the platform applies

to your specific needs at your institution.

But what I would like to do at this point,

having talked about captioning,

how our processes are different

from those of other companies out there,

and the need for captioning to really

address inclusivity for all students,

is to see if

you have some questions, to open up

the opportunity for you to ask, and to

have us share with you some

of the information that you're looking for.

So what I would like to ask

you is that if you do have questions,

please include them in the chat box here in the Zoom,

and Ezra is going to be monitoring that

and sharing with us what those questions are.

Yeah. So thank you. We actually just got our

first and now second question, both from Mike Wall.

So I'm going to answer this live.

I hope everyone can hear and see.

But the question is basically,

how do two human editors having

to listen and read the ASR transcript

actually save money over

one human captioner who also listens?

Fantastic question.

Do you want to take that, Scott,

or do you want me to field it?

I'd be more than glad to.

Go ahead.

So the way that actually saves money is this:

a human captioner alone needs time

to do research to identify correct spelling

and to understand how terminology is being used.

Because of the models that we're able to upload,

our technology does all of that for the caption file.

So for example, if the subject matter is biology,

our technology is already prepared with

all the correct biology terms

and the correct spelling of the biology terms.

If you have just a human captioner,

and they are trying

to figure out how to spell those terms,

oftentimes, they have to stop the captions,

go to the internet,

do research, get that information,

and come back and finish the captioning.

The time that requires is much more

extensive than what our technology needs to produce the same result.

That's just one small example.

Thanks, Scott. The follow-up question from Mike was,

captioning does not do well with a teacher speaking

advanced level mathematical equations.

So what I'd like to address on that

is that we actually have models built out

for every subject that

you can think of in higher education.

In regard to the equations themselves,

they will be typed out

the way that you hear them spoken.

There's no way for us to, in a short matter of seconds,

add the whole formula that you're talking about.

But assuming that the model is trained properly,

all of that would be accurate as well.

We've also done that with statistics, calculus,

other types of classes and have been fairly successful.

The next question.

Ezra, if I may.

Yeah, go ahead.

If I may add to that, Ezra,

also specifically for that class,

if you're using a PowerPoint, for

example, for that session,

or you have notes that are specific to that session,

you're able to upload that information to

our automated speech recognition engine,

and it will be able to learn from your specific notes.

So in addition to the model that Ezra was

referring to that has already been built into the system,

we're able to take your specific notes so that

your specific formulas will be uploaded into the system.

That's a unique differentiator between us

and the other companies that are out there and available,

and one of the main reasons why we wanted to develop

our own speech recognition engine

as opposed to using a commercial one.

Yeah, excellent point, Scott.

Zoe Moores, if I'm pronouncing the name properly, asks:

is there human editing during the live captioning?

How is this implemented in the workflow?

Thank you. So Zoe, great question.

Yes, there is human editing.

We actually have three live correctors

on this call right now, and on all calls.

The way that works is that the ASR

will churn out something right away,

the machine captioning.

The editors then line up in order,

get little snippets,

make the edits extremely fast,

and then the text is shown.
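To illustrate that ordering, here is a rough sketch: machine captions arrive as short snippets, correctors fix them in order, and the corrected text is displayed a few seconds behind the speech. The structure and names are illustrative only, not Verbit's internal implementation.

```python
# Illustrative only: snippets in, corrected captions out, in order.
import queue

asr_snippets: "queue.Queue[str]" = queue.Queue()  # raw machine captioning
displayed: list = []                              # what viewers actually see

def correct(snippet: str) -> str:
    """Stand-in for a human editor fixing one small snippet very quickly."""
    return snippet.strip().capitalize()

for raw in ["hello everyone", "welcome to the webinar"]:
    asr_snippets.put(raw)                # the ASR churns out text right away

while not asr_snippets.empty():
    displayed.append(correct(asr_snippets.get()))  # edited, then shown

print(displayed)  # appears roughly 5-6 seconds behind the live speech
```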

So the follow-up question to that is from Daron Bryant,

who asked, the latency is around

5-6 seconds. Is that normal?

The answer is yes. 5-6 seconds

is actually considered extremely

fast when we're talking about highly accurate captions.

If you've ever been through an airport,

obviously not today, in today's situation,

and you see they have CNN or Sky News

or BBC up on the screens and you have the captions,

you'll usually see between 15 and 20 seconds of delay,

and that's usually with a stenographer.

So for us to be able to give you highly accurate,

what we call 99 percent plus

captions with only 5-6 seconds is considered very normal.

Another thing to think about,

and maybe Scott can talk about this a

little more, is that

people who are hard of hearing or

deaf wouldn't really feel the difference

the same way that those who have normal hearing would.

So Scott, do you want to add anything on that?

Sure. I'd be glad to.

The critical part that Ezra was getting at is that

when you do have, for example,

a PowerPoint slide up,

that latency becomes

a critical factor, so that viewers are able to

see what is typed out in

conjunction with what's being

displayed on the screen,

be it a PowerPoint or an image

or a screen of the instructor writing out

the formulas, so that those two

go in conjunction

with each other and support each other.

So we're able to really

reduce that latency and

give the user the option to choose

how they want to view the information.

For example, right now, you have the option to have

just captions at the bottom of your screen,

or you can open up the full transcript.

If you're like me, I would rather have

the full transcript open so

that I can look at the screen,

go back to the transcript, and see what was being said.

If it's just the captions alone,

then you're limited to

just those few words at the bottom of the screen.

Once they're gone, they're gone.

So again, a differentiator here is that

it's not just captions for the academic setting.

For the academic setting,

you also have the full transcript

that can be opened in a side window,

and that is huge.

Thank you, Scott. We have a bunch of questions.

I'm going to try to get to

as many as we can in the next eight minutes.

If not, I will type them out separately.

Judith asked, so how much preparatory material

do your editors need

to facilitate their work and what kind

of material is most useful?

So the short answer is,

you don't need to provide anything.

With our experience, we know that

most faculty members don't want to

take on added responsibility and may not

have the time or the know-how to upload that information,

which is why we already have built-out

models for most of the subjects.

It's just a matter of how easy you make it

for our ASR and how much of the work it's doing.

So you will never suffer on quality for not uploading.

You will only benefit yourself in terms of the speed

and accuracy that you get by uploading.

So I hope that answers your question fully.

So Mandy has a very relevant question

and Scott will talk about this.

There is a delay, as expected with live captions,

but the sentence jumps to

catch up, losing some of the speech.

So learners who cannot hear will

be unable to fill the gaps.

This is a wonderful point, Mandy.

The truth is, this is currently a limitation

in Zoom and how Zoom runs its captions,

as we are a third party that connects with Zoom.

The simple solution, like Scott said,

is to open up the View Full Transcript,

and you can try that now.
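For context on what "a third party that connects with Zoom" typically means: the meeting host copies a per-meeting caption URL from Zoom, and the captioning provider POSTs corrected caption text to it with a sequence number. The sketch below assumes that general mechanism; the URL is a placeholder and the parameter names are illustrative, not a guaranteed description of Zoom's or Verbit's integration.

```python
# Hedged sketch of posting captions to a Zoom-issued caption URL.
# CAPTION_URL is a placeholder; parameter names are illustrative.
import requests  # third-party HTTP library

CAPTION_URL = "https://example.zoom.us/closedcaption?token=..."  # from the host
seq = 0  # captions must arrive in order

def send_caption(text: str) -> None:
    """POST one corrected caption line, tagged with its sequence number."""
    global seq
    seq += 1
    requests.post(
        CAPTION_URL,
        params={"seq": seq, "lang": "en-US"},
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        timeout=5,
    )

send_caption("Welcome to the webinar.")
```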

I didn't want to say this now,

but we also are launching

a brand new product called Live Room by Verbit,

which would actually be a window that is connected to

Zoom and you would be able

to basically see the full transcript,

and take notes, which Scott

can talk about a little bit as well.

Have you ever tried reading subtitles and taking

notes at the same time? It's near impossible.

So we have that feature as well.

That will all be out June 5th, officially,

but I'm happy to show that to whoever would

like to see that in a private meeting as well.

Excellent. So again, getting back to user choice,

we all engage in different ways with

the content in front of us,

and in the environments we're in.

It's critical that you have the ability

to choose how you prefer to engage with it.

So as Ezra was saying,

you have the in-Zoom in-app experience,

where you can have captioning and

the live transcript on the side,

and that live transcript is also searchable.

So for example, in an academic setting,

if the instructor refers to

a definition early on in the session,

and then later on refers to it again,

and you've forgotten what the definition is,

you can search that term and go back and find

that information, which enhances learning.

But then also in addition to the in-app,

as Ezra was referring to,

you also have an enhanced transcript

to which you're able to add notes, highlight,

annotate, and then download after the session.

So it's just providing a greater opportunity for

learning to actually take place.

Can I ask one question and also make a statement?

The statement is, when we started

talking to Ezra and his company, I said,

I understand that the commercial model they have is

each customer buys an amount

of whatever license they need,

and the more you use, the more discount you get.

Well, in principle, it's agreed that they would treat

all their higher education customers

the same by giving that generous discount.

So we're going to have to press him on that.

He's made that promise, but you've got to hold

his feet to the fire. So that's one thing.

Nas, you're playing hardball here.

No. It's really because I got

him to do the same with the corporate clients,

which is the right thing to do.

You're getting collective purchasing power and,

obviously, they should benefit.

The second thing I thought might be

relevant, before we run out of time:

one other thing that really

attracted me to Verbit is

the range of platforms your solution works in,

like Blackboard Ally and the like.

Then I thought maybe you should just quickly talk about

what services you're offering

those people, because I know you work with them.

Exactly. I'd be more than glad to.

Nasser is exactly right.

We do integrate into a lot of different platforms,

be it Panopto and other

video management platforms, as

well as LMSs such as Canvas and Blackboard.

We have a range of various types

of environments that we can seamlessly integrate with.

YouTube, for example, which is

a very commonly used platform

for storing academic videos.

You might have your videos stored on Box,

OneDrive, Google Drive, or some other location.

We also integrate with those environments,

which saves you a lot of time in the workflow and

the management process,

so that you're able to then

seamlessly have that file transferred

from your location to us and

back to your location captioned.

Thank you for that. Thank you, Scott.

Okay. How is this service costed? By the hour?

It's actually by the minute,

and if I'm being more accurate, by the second.

So everything is prorated,

and it basically boils down to a few different factors.

I don't want to just give

blanket pricing because we do

give discounts, like Nasser said.

In addition, we are a volume-based company.

So the more volume there is,

the lower the price per minute,

or per second, if you want to call it that.

There are also no minimums in terms

of how long you have to book a session for.

So another huge differentiator is that,

oftentimes, with other providers, if you go over your session,

you are charged

an additional 15 minutes or an additional 30 minutes.

Not with us.

We only charge,

as Ezra said, per minute.

So you are only charged

for the actual time that the service is being used.
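As a worked example of that per-second proration, the sketch below uses made-up rates and a made-up volume tier; actual Verbit pricing is negotiated per customer.

```python
# Made-up numbers, purely to illustrate per-second proration with a
# volume discount and no minimum booking or overage rounding.
def session_cost(seconds_used: int, rate_per_minute: float,
                 monthly_minutes: int = 0) -> float:
    """Charge only the seconds actually used; discount grows with volume."""
    if monthly_minutes >= 1000:      # hypothetical volume tier
        rate_per_minute *= 0.80      # hypothetical 20% volume discount
    return round(seconds_used / 60 * rate_per_minute, 2)

# A 47-minute-30-second lecture at a hypothetical 2.00 per minute:
print(session_cost(47 * 60 + 30, 2.00))                        # 95.0
print(session_cost(47 * 60 + 30, 2.00, monthly_minutes=1200))  # 76.0
```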

Ezra, do we have time for one more question?

Yes. Someone asked, is

the transcription service compatible with

other platforms outside of Zoom?

The answer is yes, many.

I don't want to go ahead and start listing

off names for fear of forgetting some,

but there are a ton of them, with the exception

of Microsoft Teams currently.

That should be changing soon,

but we are compatible with all of the major platforms.

That's a perfect lead into the next slide.

The next question that I am sure that you all have is,

how does this process work for my institution?

Because we've already shared with you that we're

able to integrate into a lot of different platforms,

a lot of different environments,

we're able to change

our workflow to meet your workflow needs.

See, that's one of the huge aspects of Verbit that

was very attractive to me when

I was looking for my next career move:

Verbit does not establish

their workflow and require you to match it.

We really strive to understand

what your workflow is and how we can adapt

to it to best meet

your needs in being able to

provide that inclusive environment.

So because of that,

we would really like the opportunity to meet with you,

to understand your specific needs at

your institution, understand your culture,

and be able to really best design how

this is going to work with your environments.

So what's going to take place as

the next step is that Ezra is actually going to

be reaching out to each and every one of you on

this call to be able to set up a time to meet with you,

to talk specifically about

your needs and how Verbit can meet those needs.

So on the screen,

you see that there's Ezra's e-mail address

and his phone number,

his UK phone number.

So if you get an e-mail from

that address or a call from that number,

please accept it so that we can then be able to

really understand exactly what your needs are.

Yes. Okay. So we are currently,

I believe, out of time.

I don't want to hold you any longer

than absolutely necessary.

So like Scott said,

please feel free to reach out to me.

My e-mail address is Ezra@Verbit.ai or call me.

I hope we answered all your questions.

If there are any others, again,

we're happy to meet to consult,

and we are looking forward to that as well.

If I may have my final say, by the way.

Yes.

Some of you guests on this webinar

have already asked me about the comparison chart,

which we are drawing up.

I'm going to send it out to

all the people who registered.

The second thing that I can disclose here,

and I'm sure Ezra won't hate me for it,

is that I did a rough back-of-a-cigarette-packet

calculation, if one is allowed to say that these days.

I think you could get savings of

about 30-40 percent compared to comparable services.

I'm not dissing any other service.

As I said before, there is value in

every provider in this sector,

and I don't think this is one or the other.

However, looking at it commercially,

there's a powerful offering here, and

certainty, in terms of reliability and ease

of booking, is also

something that I have experienced on

three occasions where we

booked Verbit to do some major conferences.

The last one, we had 1,000

participants, and it went really well.

So in that respect,

not that I'm giving any endorsement one way or the other,

but I think it's fair to reflect on the experience so

that people won't have

to go through that kind of a question.

Anyway, good luck, everyone,

and be as hard as you can with these guys.

They really need a bit of

arm-twisting to give you the best deal.

Thank you very much, Dr. Nasser.

Scott, thank you for your time,

and thank you everybody else.

Please, please, please feel free to reach out.

I don't bite usually, and I'm looking

forward to presenting a little more to you.

Thank you.

Take care, everybody.

Everyone, thank you for joining. Thank you.

Thanks, everybody.