So welcome to our next session.

I'm excited to have here Jeff Rubenstein from Kaltura.

Kaltura is a video platform being used

by probably most of the leading universities in the US.

We're going to take a deep dive

into the topic of video and

how to better engage students with

it. Just as a reminder,

submit your questions live through

the Q&A feature here in

the Zoom and we will address

them at the end of the session.

As I said, I'm going to be tracking them.

We're featuring the Verbit live integration with Zoom,

so you're going to see closed captions the second

that we enable it through the settings in Zoom.

It's coming; it should be live any second now.

Because we're utilizing human transcribers, what we recommend is

to enable it with "View full transcripts."

You will be able to see closed captions, but we suggest

using "View full transcripts," and it will

pop up on the right side.

So, Jeff, I'll turn it over to you.

Introduce yourself and talk

more about the Verbit-Kaltura integration.

Fantastic. Thanks, Jaques, and

thanks very much to Verbit for hosting this, and again,

to the folks behind the scenes because they

are right now scrambling to make us all look good,

or so I can only assume.

Let me go ahead and start my share,

and let me get up the slide deck here and minimize that.

Now you should, in just a second,

have my deck, I hope.

Let's see. Beautiful. Jaques, can you confirm

you have my deck on screen?

[inaudible] Yes.

Perfect. All right. So, folks, thanks for joining.

As I said, my name is Jeff Rubenstein.

I'm the Vice President of

Product Strategy for Education at Kaltura.

I have been in EdTech quite some time

at a variety of companies building

technologies for teaching and learning.

The thing I'd like to talk about today for

the next 20 minutes or so is accessibility:

how you can facilitate it with

video in your courses, why you should facilitate it,

and some points on doing it at scale.

This is me, this is

my e-mail address, and my sadly empty Twitter feed.

But if you do need to reach out to me, please do.

I put my e-mail address in the chat window as well.

So don't hesitate to reach out to me if I can be of help.

So first off,

and this is probably

something everyone here already believes,

but to me, the goal of

"accessibility" is much bigger than just

helping out those who may have hearing problems or

information processing challenges

or vision problems, etc.

Frankly, I think

just meeting accessibility guidelines

is actually a really low bar.

It's relatively easy to do the bare minimum,

but that's not a very good goal.

I think the real goal of accessibility

is to make sure that you provide

tools that help everyone learn better.

And certainly accessibility costs money.

There's an investment aspect there,

but there are smart investments in

accessibility that do improve comprehension for all,

and the more of those that we can

do and the more we spend our resources there,

the better job we'll do for everyone.

So that, I think, is the goal.

One thing about video is

lots of folks think of video as a challenge,

because they think about adding captions, for instance,

after the fact in

a piecemeal way or in a one-off kind of way, and then they

often wait until they have a deaf student in

a class and then they have to go back and

caption everything and it's expensive.

So it comes out as more of a problem.

But I think if you look at it holistically,

video actually can be

the solution to a lot of those problems, and in fact,

it can be a higher-value solution

that contributes more to

everyone's comprehension than other kinds of documents.

That's the accessibility

that we want to be spending money on.

So a couple things about video.

There are tremendous inherent advantages

to video for accessibility

certainly as opposed to just sitting in

a traditional lecture hall or seminar room,

just because video, by

its nature, can be viewed over and over again.

So, as many times as somebody wants, at the time they want,

when it's best for them,

in the environment they want,

with the equipment they want,

at the speed they want.

This is inherently an aid to

everyone's comprehension: you

can watch it again at half speed,

with the equipment you prefer,

in the place you prefer,

at the time you prefer.

These are tremendous advantages for everyone.

There's also the fact that video can be indexed and searched,

so you don't have to be

noting down where in a lecture

something was mentioned, because you can search for it.

You can say, find me the spot in

the video where this was talked about.

You can also, in fact,

track what people are consuming,

how often they're consuming it,

which aspects they're consuming.

Are they reading the closed captions?

Are they reading them in Spanish as opposed to English?

That can actually help you figure

out how you can reach learners better.

One more thing about this:

we haven't been able to do a study on this yet,

so it's anecdotal at this point.

But we have found anecdotally that if you

use lecture capture in your classrooms,

a lot of the need for

some other kinds of accessibility aids drops.

So for instance, one of the more common services

that schools deliver to students is note-taking,

because if you're in a class

and you have any kind of auditory or speech

processing challenges or ADD

or various other sensitivities,

it's very hard for you to follow the lecture and take

notes and comprehend everything,

even take notes at speed.

So very often, schools are hiring a note-taker

or paying another student

in the class to take good notes to share,

and this is not really

a very high-value kind of assistance because it

only helps one person and

it's operationally very difficult to arrange.

So what we found is that if you use lecture capture

and then it's captioned for everyone,

which helps everyone comprehend, then it can be

viewed at will as

often as someone wants at the speed they want.

Not only does that help

all the hearing-enabled students comprehend better,

but it relieves the need for note-taking services for

those students who don't

necessarily comprehend as quickly

or as easily as everyone else.

So you can take the spend that you would be

putting on note-taking, which is an inefficient

kind of spend, and put it on a kind of accessibility

spend that actually benefits everyone.

That's the way to think holistically about how to

improve comprehension for all using video.

So a couple things about Kaltura Video

in particular, though again,

a lot of what I've said applies to any video you use,

but there are things to think about

when you're thinking about this process.

First off, you can have

these captions translated into other languages.

So if you want English and Spanish,

or English and French, etc.,

to help reach students that

might learn better in their native language,

that's a service we can provide on the back-end.

Kaltura Video can also come with attachments.

That is to say that

you can upload a document,

Excel, PDF, Braille, etc.

and it lives with

that video, so that wherever the video is published,

those documents go with it, and

the student can download them along with that video.

So for instance, maybe there's a diagram of a triangle.

A blind student couldn't see that

in the video frame, but they could download it to

a piece of software they use that can read

that document and let them

feel what is being displayed on the board in,

for instance, a geometry class.

You also, as you see, have the speed selector,

so the student can

play it back at whatever speed they wish.

We have multi-stream capability,

as you can see here on the right.

So this can be used not only to

present your webcam alongside a presentation screen;

in fact, we can support up to four of these streams.

One of them can actually be

a sign language interpretation stream

that you can present along with the video.

We can also do multiple audio tracks.

Why is this important?

Again, somebody who is blind or has trouble

seeing can't see what's in the video frame,

so they need somebody to describe

what's happening in it.

The teacher is at the board,

the teacher has drawn a right triangle with sides A,

B, and C, etc.,

and that goes as an alternate audio track,

and the student can listen to

that track alongside the main audio track of the video.

Then of course, because we're an open API-based system,

we can integrate any other

video enhancement technologies that

come out from any core technology companies

that do this kind of work

like Google or IBM.

IBM is coming out with things that can

detect facial expressions in videos

or detect the gender mix in a classroom.

Other aspects of

accessibility apply here too. For instance,

some people have photosensitive epilepsy.

It would be wise for us to run videos through

a system to detect if there's

flashing things going on that might trigger an episode.

Now, we may not ever build that ourselves,

but if that exists in the ecosystem,

we can integrate it with Kaltura Video.

Okay, time is flying, I'll speed up.

Please take your time, and we're late, so people

[inaudible] . It's an interesting question,

so please take it [inaudible]. You have time.

So thinking about the elements of accessibility,

whenever you look at a project,

whether you go with Kaltura or somebody else,

here are the things to think about.

And you may be already on

this journey in some way, shape,

or form, and I hope this helps you move

forward as you work on this.

First off, there's the player itself.

You, of course, need a compliant player.

It's got to have keyboard controls

for people who can't necessarily use a mouse.

It's got to have the proper ability

to have the contrast set

so you can distinguish

between the frame and other things.

Most players are fairly good at that.

The standard these days is WCAG 2.0 AA for most players.

If you don't know what that is,

look up the Web Content Accessibility Guidelines.


That's the thing that

most web technologies measure themselves

against, and that's

an important aspect, obviously, in accessibility.

Also, the player should have things like speed controls,

should have things like

attachments, should have, obviously,

the ability to display closed captions

and, in the ideal case, multiple audio tracks.

So then you have what's in the video itself,

in the video that you record, and, of course,

you want the video itself to have captions

and all attachments that

you add to that video should themselves be accessible,

so don't attach a non-accessible Word doc

or non-accessible PDF to that.

Then you want to have a lot of metadata, and ideally,

the system you use will help you create that metadata to

indicate things about the video

like what's it about and

who's in it and who published it

so that you can then search

for a video that was recorded by

Jeff about biology and

talks about the endoplasmic reticulum.

That's also an important accessibility

issue, again, for all,

because you want to make

these things easy to find and easy to reference.

Then ideally, you have a system where somebody can search

for any bit of metadata about

the video and be able to search within the video,

which is something else that the captions

provide. Because captions, by the way,

as opposed to just transcripts,

are linked to the specific spot in

the video where that phrase is said.

So if you can search in the captions

you can then jump right to the point in the video where

that phrase is being

said and find the bit of information you want.
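To make that concrete, here is a minimal sketch of how time-coded captions enable jump-to-phrase search. This is not Kaltura's actual implementation; the cue format is a simplified WebVTT-style tuple used for illustration.

```python
# Sketch: searching time-coded captions to find where a phrase is spoken.
# Each cue is (start, end, text); timestamps are "HH:MM:SS.mmm" strings.

def parse_timestamp(ts):
    """Convert an "HH:MM:SS.mmm" timestamp into seconds."""
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def find_phrase(cues, phrase):
    """Return the start time (in seconds) of the first cue containing phrase."""
    needle = phrase.lower()
    for start, _end, text in cues:
        if needle in text.lower():
            return parse_timestamp(start)
    return None  # phrase never spoken

cues = [
    ("00:00:05.000", "00:00:09.000", "Today we will look at the cell."),
    ("00:12:30.500", "00:12:34.000", "The nucleus stores genetic material."),
]

print(find_phrase(cues, "nucleus"))  # 750.5
```

A real player would pull cues from its caption track and simply seek to the returned time.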

So that's stuff that hopefully

you do in the video creation process.

Step 3 is video enhancement.

So these are things you can do after

the fact to make the video more accessible.

So things like not just captions,

but also translations on alternate audio tracks.

Other kinds of metadata enhancement,

for instance, you can start

auto-chaptering videos with various technologies,

including some stuff that we give you out of

the box that will automatically

create chapter markers that are themselves searchable.

So for instance, in a lecture on the cell,

you would have a chapter on the nucleus

and a chapter on the Golgi body

and a chapter on the endoplasmic reticulum.

That's all the biology I have.

Then a student could search for just that bit.

I want to go to the spot right away which

talks about the nucleus, etc.

Then of course, there's the question

of how you do this in a way that's

efficient and maximizes your spend.

Here, there are a lot

of features that you should consider.

You'd ideally like to have

a rich transcription editor, or rather caption editor,

that I'll show you on the next page.

But this is to do the minor tweaks at the end

like how someone's name is spelled.

If my name is spelled S-T-E-I-N but comes out S-T-I-E-N,

a caption system is probably not going to get that right.

So you want to be able to fix that

by a global search and replace.
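As a sketch of the idea (the Kaltura editor's internals aren't shown here, and the cue structure is hypothetical), a global search and replace over caption cues is essentially this:

```python
import re

def global_replace(cues, wrong, right):
    """Fix a recurring transcription error (like a misspelled name)
    across every caption cue in one pass, preserving timestamps."""
    pattern = re.compile(re.escape(wrong), re.IGNORECASE)
    return [(start, end, pattern.sub(right, text)) for start, end, text in cues]

cues = [
    ("00:00:01.000", "00:00:04.000", "Welcome, I'm Jeff Rubenstien."),
    ("00:10:00.000", "00:10:03.000", "As Rubenstien said earlier..."),
]

# One click in the editor corresponds to one pass over all cues.
fixed = global_replace(cues, "Rubenstien", "Rubenstein")
print(fixed[0][2])  # Welcome, I'm Jeff Rubenstein.
```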

But importantly, you want to think about reuse.

A lot of the things that are produced

by professors are 98 percent the same year over year.

It's like a lot of textbooks;

they have a new edition, but they only change two pages.

So if you can give yourself a plan for reuse and

have a system that

facilitates the ability for you to reuse,

maybe with minor updates,

then you can save yourself a lot of time and

processing because the accessibility

features you built in last time will still be there,

and you can just reuse that asset rather

than starting a new one from scratch.

Then there are other things that will

help: the ability in

the system you're using to delegate editing,

say, to TAs, or even to crowdsource fixes,

and also, really importantly, to delegate spend.

Part of the problem right now is that

the workflow that most schools use is

they do nothing about

accessibility and professors create

their courses as they normally do,

which are mostly inaccessible,

and then they enroll

a student in that class who has the need.

Then there's all this hair-on-fire running

around behavior, saying,

"Now we've got to go back and caption everything."

This is not the ideal way of doing it.

One good way you can fix this

is you have a set of rules on what kinds of

videos have to have which level of accessibility,

and these do vary, and you should

consult your legal counsel

on this because I'm not a lawyer.

But typically, the lectures have to be fully accessible,

whereas potentially,

other kinds of videos don't have to be.

For instance, maybe a

discussion board that might have some video in it.

Again, talk to your lawyer.

And then what you can do is, once you have

this plan of levels of accessibility,

then you can delegate spending authority,

and the system, ours for instance, will enforce that.

You can say, "Okay, Professor so-and-so,

you get X dollars, and Professor so-and-so,

you get Y dollars."

Then they are the ones who could hit the button to say,

"Caption this or translate that."

You can actually delegate that

down to a department or even to

an individual professor to

spend on these kinds of services

and that's something that we facilitate for you.
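As a rough illustration of delegated spend (the class, prices, and allowances below are made up and do not reflect Kaltura's actual billing), a system enforcing per-professor captioning budgets might look like:

```python
class CaptionBudget:
    """Per-professor (or per-department) allowance for captioning spend.
    The system approves a job only if it fits the remaining budget."""

    def __init__(self, allowance):
        self.allowance = allowance
        self.spent = 0.0

    def request(self, minutes, rate_per_minute):
        cost = minutes * rate_per_minute
        if self.spent + cost > self.allowance:
            return False  # over budget: job is refused
        self.spent += cost
        return True

budget = CaptionBudget(allowance=100.0)
print(budget.request(minutes=30, rate_per_minute=2.0))  # True  (cost 60)
print(budget.request(minutes=30, rate_per_minute=2.0))  # False (would hit 120)
```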

Let's talk about scaling,

the ability to do this with video, which is

really how you put something like this into practice.

By the way, one of the next sessions is

by a professor from Cal State Chico.

They are a customer of ours and they

put in place a massive system to do this at scale

which is really quite interesting,

and I'm not sure if she'll talk about it,

but if there's Q&A, I recommend asking,

because they're a very large institution,

and they've done a masterful job

of doing this in a way that makes sense.

So when it comes to scaling,

and we'll end on this,

you want to put in place some workflows.

Workflows that you can

use to auto-caption things

that meet certain requirements.

So if videos are created in a certain course

or in a certain category

or by a certain person,

you can actually have a workflow that says these get

captioned or otherwise enhanced

based on metadata that

can get put in when the video is created.

So it's not an after-the-fact process.
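A metadata-driven workflow like the one described can be sketched as a small rule table. The field names, values, and actions here are purely illustrative, not Kaltura's actual schema:

```python
# Hypothetical rule table: decide at upload time which enhancements a
# video gets, based on its metadata.

RULES = [
    {"field": "category", "equals": "Lectures", "action": "human_caption"},
    {"field": "course", "equals": "BIO-101", "action": "human_caption"},
    {"field": "category", "equals": "Discussion", "action": "auto_caption"},
]

def actions_for(metadata):
    """Return the set of enhancement actions triggered by a video's metadata."""
    return {rule["action"] for rule in RULES
            if metadata.get(rule["field"]) == rule["equals"]}

print(actions_for({"category": "Lectures", "course": "BIO-101"}))
# {'human_caption'}
```

Because the decision runs when the video is created, captioning happens up front rather than after a student with a need enrolls.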

Reuse as much as you can, rather than re-authoring.

You can either have people fix their own,

and on the right,

we actually have this closed caption editor,

where you don't just see the closed caption,

you can actually go in there and fix it.

Not only can you fix it, you

can do things like global search and replace.

So if a name is wrong,

with one click, you can fix it everywhere.

You can delegate this to a TA, for instance,

or to grad students or what have you,

by role in the system.

So these are all ways you can

actually make this a lot more efficient.

Again, the point here is that video really

is something that can

solve a lot of accessibility problems.

It's a way that you can make better use of your spend.

Again, I think the money should be spent,

as much as possible,

on things that have value for everyone,

and spending on video does.

It actually reduces the need for these kinds of one-off

spends that aren't the most efficient use of that money.

The thing to do is to put some thought

behind how you can scale it,

how you can make it work in a holistic fashion,

and that's how you can get the most value out

of your accessibility dollars with video.

So with that, I can take a couple questions.

Jeff, first of all, amazing presentation.

It would be good to understand:

are you able to give another example on accessibility?

You spoke about Cal State Chico.

Can you expand with more examples that you have

seen and that you admire, maybe?

I'm sorry I missed the last word.

Yeah. So if you can give another example,

another use case of things that you have seen,

because the names you gave will, I think,

give it a

little bit more endorsement of what you're saying.

If you can give another use case?

Oh gosh. Chico is one great example at scale.

Almost all of our universities

are doing this to some degree.

We certainly include auto-captioning with our service

out of the box to make that easy,

but I also love,

for one more example,

Mayo Clinic. It's actually a hospital,

but they run an accessibility lab,

because what they're doing is

using Blackboard as their LMS, along with Kaltura,

to teach patients how

to care for themselves postoperatively.

So as soon as somebody has their knee replaced,

for instance, they get put into a course.

A lot of these patients are less technically inclined.

Many of them are older, and therefore

have a range of needs to make this stuff effective.

They have used basically all of

these features to be able to really

effectively communicate with those learners

in a way that I really admire.

Perfect. So, Jeff,

as we have our sessions running, I think it's great.

Thank you so much for this.

Please, everyone note, we have two sessions now running.

one on Making a Campus-Wide Impact

and one on the Maturity Model for Higher Education.

Please note, next Monday,

we have an Ask The Expert

on how to go online in this transition.

Jeff, this was an amazing session,

really. Thank you so much for it.

It's my pleasure, and

I hope you all enjoy the rest of today.

Thank you so much, Jeff. Good one.