So welcome to our next session,
unique ways YuJa improves media accessibility.
I'm excited to have Nannette Don from YuJa,
a video platform used by many leading universities,
here with me today to take a deep dive into the topic of
inclusion and accessible materials.
Just a reminder, you can submit
your questions live through the Q&A feature here in Zoom,
and we'll address them at the end of the session.
We are featuring Verbit's live
integration with Zoom today, which
enables you to view
a live transcript throughout the session.
So to enable the transcript,
you can just click the arrow next to
the "CC" button on the bottom menu bar,
and then click "View full transcript".
So this is something new that we are
launching. You can give it a look.
Nannette, I'll turn it over to you to
introduce yourself and talk more about
how schools can increase
the accessibility of the media that
they're using. So go ahead.
Very good. Hello everyone.
My name is Nannette and I am a Director at YuJa.
Essentially, my core responsibility is making
sure that all of the features and
workflows that we commit to our clients are delivered
appropriately and well supported.
We do offer full-scale video products.
So today, rather than covering all the components of
our products, I'm going to focus on
how our product helps institutions
improve their accessibility workflows
for video and digital assets.
That's going to be my main and core focus, and I'm
looking forward to talking about it with you.
The first topic I have today is just to talk a
little bit about what is YuJa.
So I'm going to go ahead and stop my video so I
can take a look at my notes while I'm doing this.
But essentially, YuJa is
a full-scale product with multiple components.
If we can move to the "What is YuJa?"
slide, that would be great.
So essentially, when we're looking at YuJa,
what we focus on is scalability within
recording, within video management.
We do also offer a proctoring solution,
digital accessibility for your digital assets,
which I'll talk a little bit about
because it's relevant to our topic,
and digital asset management.
So first, I'm going to talk about
the slide around our comprehensive video platform.
So just to give you a little history of
how the platform was designed,
one of the initial steps we looked at
was how to make sure that
video is being delivered in
a compliant way to all of our users.
Section 508 compliance standards
were established in the 1980s.
They began to grow
in adoption in the '90s,
and now what you're seeing is
quite a bit of a moving target when
it comes to these standards.
So we do have auditors that look at
our compliance on a regular basis.
So we're always focused on those compliance standards.
So in terms of accessibility,
what we tend to focus on in
our comprehensive platform is making sure
that we're compliant with
the principles of universal design.
So when you're looking at
whether you're
fundamentally making your videos
accessible to your users,
they first have to be perceivable,
meaning that the video content is
visible to all regardless of disability:
hearing impaired, visually impaired, what have you.
It also has to be operable,
meaning that the interface has to be
able to be navigated by all.
So that's where things like keyboard shortcuts come in,
audio description, things like that.
It also has to be intuitive and simple,
meaning that if your student has
a learning disability, or they're simply
not comfortable with technology,
it's easy for them to operate
and use as they watch their videos.
Then it also has to be robust,
meaning it has to be scalable:
regardless of the workflows that your users are
requesting when accessing content,
it's reliable for them to operate on.
So I'll just give you a background about
the core products that we have.
One is for lecture capture and video recording.
Actually, I'm still on the previous slide.
Now, before recording was
established in classroom settings,
accessibility had
quite a bit of a human element to it,
meaning that if someone was hearing impaired,
they had to have someone sit in
the class with them to take notes.
But recording has changed the game.
Recording allows folks to
record their lectures and have all of that
documented precisely, so that
a person in the class
who may not have been able
to take notes very well can
revisit the material repeatedly for their learning purposes.
Then there's organizing
your assets as an entire video platform.
The purpose of the video platform
should be to allow
folks to access those videos
in an accessible way, regardless of their disability.
So as these are moving targets in
terms of federal standards, we're constantly moving,
and creating, and designing
accessible functionality within our platform.
So the next one I'll talk about is
auto-captioning, which is the next slide.
With auto-captioning, what you'll find is that
this is something that is intuitive for folks.
Of course, auto-captioning is
something that people look at when it comes
to video to support the hearing impaired.
But it's much more broadly useful
than just supporting the hearing impaired.
I'll tell you what I mean.
So with the hearing impaired, of course,
I talked about how having
those videos of the lectures
really makes things easier for them.
They don't need a human element
to sit next to them and take notes.
They can simply re-watch the video with the captioning
and learn recurrently that way.
But it also becomes more useful for
everybody who encounters the video.
Accessibility doesn't just mean access for those with disabilities;
it means access for all.
So having that searchable metadata
available to all of your users
makes learning much more available to them.
It's a known fact that
not everyone reports their disabilities.
So the fact that you're able to scale
with captioning and auto-captioning to all
of your users is going to be
very important, because this is going to
matter to quite a few of your viewers
regardless of any established disability.
Then, also, captioning just makes
the video more adaptable.
It lets viewers access content
much more easily as they're reviewing it.
An example of this: as auto-captioning
is applied to all of your videos,
our product also creates transcripts from it.
There are going to be students
who are much more visual
and less auditory, so they're
going to want to read what's
being said in the video rather than necessarily watch it.
The transcripts created
through auto-captioning help support that.
Then also, for people who
may be a little slower to learn,
it allows that content to be pinpointed and searchable.
Having the ability
to find that specific metadata within
the video lets them go exactly to
the points of their learning,
helping them learn more efficiently.
So next, we'll talk about human captioning
in the next slide, and
why people use human captioning
rather than just the auto-captioning.
So there are going to be situations where you have folks
with a designated hearing disability.
In those classes, you're going to need to deliver
99 percent-plus accurate captioning.
Auto-captioning, even with
the best artificial intelligence, which we do have,
tops out at about 95 percent,
typically 80-95 percent accuracy.
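As an aside on those figures, caption accuracy is commonly reported as 1 minus the word error rate (WER): the word-level edit distance between the recognized text and a reference, divided by the reference length. Here is a minimal sketch of that calculation; it is my own simplified illustration, not YuJa's metric:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of four words -> 25% WER, i.e. 75% accuracy.
accuracy = 1 - word_error_rate("the quick brown fox", "the quick brown box")
```

So "95 percent accurate" roughly means one word in twenty is wrong, which is why high-stakes classes still call for human review.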
So if you want to deliver
that ADA-compliant human captioning,
you would need to be able to integrate
with human captioning partners.
So what we do is, we have one-click
human captioning request capabilities.
You can configure your own turnaround options,
which are going to be important because
depending on the instructor,
they might have different preferences.
Then you're going to be able to track
those requests as an administrator.
I'll give you an example of where
human captioning is important.
So you might have those classes
where you need those 99 percent captions right away,
and you have an instructor that's pretty much hands off,
and you don't really have a way to edit captions,
you don't have an intern or
a student to do that right away.
So integrating with a human captioning third-party vendor
will allow you to get that
within a day and deliver
that 99 percent accurate captioning to your users.
So the next slide, I'm going to talk about editing.
So as video begins to scale online,
the challenge that a lot of institutions have is,
we do have those students who require
that 99 percent captioning right away.
But we also have the rest of
the students who need those videos to
be available and captioned as well.
It's very difficult to scale that
with human captioning, because it can get quite expensive.
So if you have really good workflows,
for example, instructors
who are willing to edit those captions,
or an in-house team of
student interns,
then our caption editor
can provide an out-of-the-box compliance strategy
in a very cost-effective way.
So we have two ways to edit
captions within our video editor.
One way is you are able to click
on the bar of captions and edit them right in line.
For those who prefer to go line by line,
we have a sidebar caption editor where you can rewind to
specific parts and edit that specific entry.
So if you can think of the scenario of perhaps
having a student intern editing those captions,
it becomes a really efficient way for them to do that.
We also do have the ability to manage specific roles so
that if you don't want the student to do
anything but be able to edit
those captions for that video,
you can apply that role to them so it maintains
the security of your video.
So that's more of an out-of-the-box compliance strategy:
using the video editor to edit our 85-95 percent accurate auto captions,
in case you don't, of course, want to pay for
every single video to be
human captioned, because that can get quite expensive.
So the next slide,
I'm going to focus on the Accessibility Dashboard.
So when thinking about the workflows for accessibility,
we also thought about how
are these administrators going to manage it?
So I'm going to go through the six pieces of
functionality that are fundamentally
on our Accessibility Dashboard.
The first is of course,
to manage caption requests.
So let's go back to the human captioning.
Let's say you have different departments
that have different human captioning vendors and
different budgets for that. With
our ability to manage caption requests,
you can manage those individual providers
for human captioning.
You can allocate that human captioning to specific users,
so that only those users can make
those requests, which makes it much more
manageable to ensure you're
spending your human captioning
budgets in exactly
the right places.
The second is being able to manage
caption requests for the auto captioning.
Let's say you don't
necessarily want to make that feature available to everyone.
You can also manage
caption requests on an auto captioning basis as well,
which is important because maybe you don't want
that available to your students.
The third is integrating with multiple captioning vendors.
Again, at a department level you're going to have
people using different vendors,
so you'll want to be able to manage that
appropriately within your institution.
The fourth is managing caption settings. For example,
we have things like profanity filters.
That's important because, obviously, auto captioning
doesn't get everything right.
So you want to make sure that if it doesn't get something right,
there's a profanity filter in place in case
it resolves a word to
an unsavory one that you don't want for your users.
We do have those in place
to manage within our caption dashboard.
You can also add caption dictionaries.
You can add caption dictionaries on
an individual video level. For example,
if you have a class being taught
and there's a specific, very uncommon name
within that video,
you can add that name to
the caption dictionary and apply
it to that specific video.
That way instructors can
increase the efficacy and accuracy of
the auto captions. You can also add
caption dictionaries on an institutional level.
So let's say you have a medical department where there's
terminology that's very
specific to that department. You can add
a caption dictionary at that level, so
that the system learns as
people speak about that specific terminology.
This is really helpful for training
your auto captioning workflow:
as people edit the captions,
they find that they're actually editing
less and less, because you've
trained the auto caption dictionary successfully.
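For illustration only, you can think of a caption dictionary as a substitution layer applied over the raw recognizer output. Real systems typically bias the speech model itself, and this sketch, with made-up terms and a function name of my own, is not YuJa's actual implementation:

```python
import re

def apply_caption_dictionary(text: str, dictionary: dict[str, str]) -> str:
    """Replace commonly misrecognized terms with their correct forms."""
    for wrong, right in dictionary.items():
        # Whole-word, case-insensitive replacement.
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text,
                      flags=re.IGNORECASE)
    return text

# Hypothetical department-level entries.
medical_dept_dictionary = {"my o cardial": "myocardial", "you ja": "YuJa"}
fixed = apply_caption_dictionary(
    "Welcome to the you ja lecture on my o cardial infarction.",
    medical_dept_dictionary)
# -> "Welcome to the YuJa lecture on myocardial infarction."
```

The point is the same one made above: each entry you add removes a whole class of recurring edits for your caption editors.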
All right, so the next slide,
I'm going to talk about multi-language captioning.
So essentially, when we looked at captioning,
we said, people aren't just captioning in English;
they're captioning in other languages.
So for example, you may have
foreign language classes where students are doing
introductions, let's say in
French, to their family, and you want the captions
to capture that in
French so that they can review it later,
perhaps even read the transcripts if they'd like,
and use them for their writing.
This is a really convenient feature.
You can multi-language caption
on an individual video level.
The multi-language captioning features are also
infused with artificial intelligence,
just like our English captioning.
So it gets that same level of accuracy, and you
can also edit it in the caption editor as well.
I have quite a few customers,
especially in the online foreign language
departments, that utilize this
not just for generating auto captions on
their videos, whether student submissions
or their own lectures,
but also as
a basis for creating transcripts
that students can review later and use for their writing as well.
All right, so the next slide.
So I talked about auto caption dictionaries,
but let me explain a little bit
more why this is important.
It allows you to multi-level train your auto captioning.
Think of auto captioning as
something that you can improve over time.
Now, we've of course had
auto captioning in our product for many years,
so its efficiency and accuracy
have been improving on their own. But as
you add these specific tags,
these specific words, you're really going to be able to
customize that auto captioning
to your specific dictionary,
say, your school name
or things specific to your institution.
As people utilize the auto captioning more,
you can be more assured
that you're going to be able to
customize it to the specific needs of your institution.
The next slide is other accessibility tools.
There, what we'll focus on is:
where are your tools expanding?
So of course, you're going to need things like
keyboard shortcuts, audio description support.
I'm just checking because I don't
see the new slide online.
I just want to check with Maya.
Did you change the slide on that?
Sorry, I was on mute. Yeah, it's
probably not the new version,
so sorry about that.
It's the audio description support slide.
Got you. So with audio description,
there's been a lot of focus on the hearing impaired
when it comes to compliance.
But what about the visually impaired?
So with audio description support,
essentially what happens is,
you have the ability to add audio tracks to your video,
and those audio tracks really
support your ability to serve the visually impaired.
You can upload the audio tracks to the video.
They can be synced to a specific time sequence,
and they allow a narration of what's actually
happening in the video for those who can't see it.
It's also a good tool for those who simply want to
add narration to their video to assist their users.
But this is another consideration for compliance:
it's not just about the hearing
impaired; you also have to
support the visually impaired as well.
I'm sorry, I think I got it reversed.
So that wasn't you, that was me on the slides.
Then the next will be the other accessibility tools.
This slide talks about moving targets.
So when you're looking at accessibility,
there's always going to be a new
standard created, and that's a good thing.
That's because new technologies will come out to
support your users and whatever needs that they have.
Of course, with our product,
it's not just limited to captioning or
caption dictionaries or even audio description.
You have what we call responsive design.
As your users adjust their screen views,
whether they're on tablets or
their computer, or they're shrinking their
window, we have a very responsive design so they can
view the information appropriately.
Keyboard shortcuts, now these preferences
aren't just limited to people with disabilities.
We actually have medical students
who love the keyboard shortcuts.
They have quite a bit of
content to move through,
so they don't have time to use their mouse.
They're just moving around with keyboard shortcuts,
utilizing components within our video.
We also support screen readers, for those who need
their screen read because they
can't see it visually themselves.
On top of the fact that we have automatic captions,
we also allow you to upload captions in any language.
So whether it's Chinese or Arabic,
you can upload those captions to
our product and they will be delivered in the video
the same way our English
or other auto captions are.
So we do support different languages within our products.
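As a concrete aside, uploaded caption files are commonly exchanged in standard formats such as WebVTT or SRT. I'm not certain which specific formats YuJa accepts, so treat this as a generic illustration: a minimal WebVTT file, with French cues echoing the foreign-language class example from earlier, looks like this:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Bonjour, je m'appelle Claire.

00:00:04.500 --> 00:00:08.000
Je vais vous présenter ma famille.
```

Each cue is just a start/end timestamp plus the text to display during that window, which is why captions in any language can be handled the same way by the player.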
So just know that,
as I would expect of any full-scale video product,
the product we
deliver is constantly evolving, because
we are working
with compliance auditors and constantly looking at
the different standards, making
sure that our product meets
those accessibility standards
in the federal regulations.
Then last but not least,
I will talk about a product that we are
introducing to the market called Panorama.
So this is a little bit different than our video product.
It actually adds to the comprehensiveness
of the digital accessibility that we offer.
So essentially with Panorama,
what it does is it integrates
right into your learning management system.
I'll explain why people utilize products like this.
So if you think about going online,
that means a lot of
your content is going to be delivered online,
which means it's usually digital.
With digital content,
you can imagine that institutions have millions of
digital assets stored in
their learning management systems.
So there was a point,
when things weren't online like they are now,
where that could be managed
just fine by
the accessibility experts within the institution.
But now that you have much more content,
technology is the only
way you're going to be able to
scale that content throughout your institution.
So with Panorama, we
integrate with your learning management systems.
We automatically generate
accessible formats for your users.
So what that means is that your instructor
will upload a piece
of content into the learning management system,
whether it's Canvas, Blackboard,
or Moodle, and then our system will do the rest.
It will automatically generate
the various files, for example,
a braille file for someone who is visually impaired,
or a high-contrast file for someone who is color blind,
and they will be there, so it doesn't limit
the accessibility of that file for anyone.
Anyone who has an accessibility issue
can access that file based on their needs.
We also, in Panorama,
have the ability to deliver reports and
insights on how your accessibility
is scaling within your institution.
So what a lot of our customers will do is
utilize this tool first, of course,
to convert to accessible formats,
so that they're assured that throughout
their institution [inaudible]
more accessible formats for everyone.
But then, as they improve
those accessible formats over time,
they'll begin to assess whether or not on
an individual department or institutional level,
they're delivering accessible content appropriately.
So our reporting also allows people to
judge the efficacy of
their accessible technology, so they can
justify why they utilize it within their institution,
and also let their users know that they're
doing everything they can to assist them
and make their digital assets accessible.
So products like these are going to be very important,
whether it's YuJa or somebody else,
to ensure that as you deliver
video and digital assets at scale,
your users are getting
all the advantages possible
based on the technology available in the market.
So Nannette, I'm going to leave a few minutes
for questions from our excited audience.
Sure. So in summary,
I know this is a little bit of
a weird time given COVID-19, but
as you move online,
the first consideration, of course,
is just getting the content out there so that
your students can learn and you can fulfill
your semester goals.
But then the next consideration is:
is that content perceivable?
Is it visible to all?
Can everyone access that content appropriately?
Is it operable? Can anyone
navigate that content and learn from it?
Is it intuitive?
Is it something that people
can really access without having to
do too much learning? Because they
already have other things
to deal with, whether they're instructors
teaching courses or
students just trying to finish their semester out.
Is it robust? Meaning, is it
reliable enough that, regardless of
the accessibility issue or
request, or even just the adaptability of this content,
people are able to
access it with the features you're offering?
So in summary, those are the things I want to consider.
Nannette, I want to take this point. Sorry to interrupt.
I want to take this point because I see
a lot of questions about this area,
so I'm trying to shrink them into one question.
First of all, thank you so much.
It's amazing to see how much YuJa focuses on
accessibility; accessibility is the star.
I think you guys are very advanced on that.
There are a lot of questions about this.
It's really obvious that online learning has been
ramping up in the last few years,
and of course, it would still be ramping
up even without COVID-19.
We see a lot of online learning programs
even from the top universities.
A lot of the questions ask,
from your expertise, what do you think
the future of captions will be in online learning?
I think the question is,
how big are captions,
not only for hard-of-hearing and deaf students,
but for all? This is one of
the topics that we are going to discuss in this summit:
how captions can benefit other students.
So what is the future that you are seeing?
Yeah. So I think what we're seeing,
and let me just turn on my camera now
since I'm talking to people.
I think what we're seeing in
this space is that people are beginning to see
that the captions aren't
just an advantage for those who are hearing
impaired, as I spoke about a little bit earlier.
It creates a really robust set of
metadata that you can utilize for
everything from search, to
helping people develop in their learning,
to reading transcripts, to even the things we're doing
with artificial intelligence in the way we
deliver these transcripts and captions
for our users. We have things like word clouds,
where we summarize the content
of what's being said in the video,
deliver it in a word cloud, and also deliver it in
a structure of super topics and sub-topics.
So when people are learning,
they have a foundation for what they're about to see.
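For a sense of the mechanics, a word cloud is essentially a ranked word-frequency view of the transcript. Here's a toy sketch of that idea; the stopword list and function name are my own, and this is not YuJa's actual implementation:

```python
from collections import Counter

# Minimal stopword list for the sketch; real systems use larger ones.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

def word_cloud_weights(transcript: str, top_n: int = 5) -> list[tuple[str, int]]:
    """Rank the most frequent non-stopword terms in a transcript."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return counts.most_common(top_n)

sample = ("Captions help all students. Captions make video searchable, "
          "and searchable video helps learning.")
print(word_cloud_weights(sample, top_n=3))
```

The returned weights are what a renderer would use to size each word; grouping related terms into super topics and sub-topics takes more sophisticated modeling on top of this.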
So there's quite a bit of
innovation going into auto captioning.
You're also seeing that in live captioning;
we do also live stream.
So there are innovations moving forward in AI
when it comes to live captioning,
to make that a more automated process.
But in general, to answer your question,
the efforts in accessibility don't just make
content accessible for those
with disabilities or reported disabilities;
they make it accessible for all.
These features and these sets
of captions actually help everyone.
We've seen that again,
across the board, with all of our students.
A lot of students report that they love
the ability to curate content and
pinpoint it to
the exact specification of what they're looking for.
We have students who remember
something the instructor said, and they
do a really specific search query through all of
their content and are able to
pinpoint that content right away.
So I think what you'll see is that
people are going to begin to see the advantages
for themselves, regardless of their disability.
But you'll also see improvements in the AI of
the automation within accessibility, if that makes sense.
Definitely, and I think this is
really something that we see at Verbit.
It's exciting to see the rise of
the need for captions for everyone.
One small question that someone asked,
and I think it is important,
maybe for potential future users of YuJa:
if the video already has captioning,
is it still required to provide a separate transcript
on the webpage?
Yes. So our product
specifically will generate a transcript.
It's also infused with artificial intelligence,
which means it's not
just a raw transcript that goes line by line.
We work to break up the cadence of
the speaker through paragraphs and punctuation.
So they'll get an automated transcript that,
if you enable it, they can download
and review offline.
Great. Okay. I think that's about it.
People are writing points about how, yes,
we will see an increase in captions, and I think
you made a very good point
about how other students who don't
necessarily need it accessibility-wise
want to use captions.
I think any of us who is a parent
is used to watching everything with captions,
and the volume doesn't even have to be on.
So I think this is really where the world is going.
I think we're out of time, but if you
have one last point that you want to make, go ahead.
It was very interesting; I feel like we
need more than 30 minutes with you, Nannette.
Just remember that a lot
of folks don't report their disabilities.
They don't want to
tell people that they have a learning disability.
So delivering an accessible platform
doesn't just serve those who you know of;
it also serves a wide variety
of people that you may not know.
That's why, in the future,
you're going to need a partner,
a vendor that will partner with
you, really innovate with you, and
understand these accessibility workflows to
make sure that this content is usable for everyone.
So that's what I would leave you with,
since we're running out of time.
Thank you so much, Nannette.
I think we all found it very, very interesting.
Again, this is a whole world to learn,
but it's very exciting to see how
YuJa is focused on accessibility,
and it's very interesting to see
where the world is going caption-wise.
I agree that we are going to see
more captioned content over time.
So I think it's time
for the next session now.
For the attendees:
once you leave the Zoom meeting,
return to the Agenda page,
and then you can see the next Zoom session.
Nannette once again, thank you so much.
Have a safe and healthy day.
You as well. Bye-bye.