Scott, Rob, are you ready?
Ready to go.
So welcome to Verbit webinar:
The Accessible Classroom Redefined.
This is Episode 10 by the way,
which I'm very excited about.
Today, we're going to talk about how
artificial intelligence empowers personalized learning.
I'm very excited to host today
a guest speaker from Mediasite, Rob Lipps.
We're once again joined by
our Accessibility Evangelist, Scott Ready.
Rob, I'll turn it over
to you so you can introduce yourself.
Hi. I'm Rob Lipps.
I work for Sonic Foundry. We're the makers of Mediasite.
I'm coming to you from snowy, cold Minneapolis.
So I trust most people
are probably in a warmer place than me.
But I look forward to this topic.
I spend a lot of my day talking about this very topic.
So I'm looking forward to
the webinar and thanks for having me.
Perfect. Sounds good.
Scott, would you like to introduce yourself?
Sure. I'd be glad to. Welcome, everybody.
I'm Scott Ready. As Michal said,
I'm the Accessibility Evangelist here at Verbit.
I've been in education and
accessibility now for over 30 years.
I'm joining you from
partly cloudy but 80 degrees in
Hilton Head, South Carolina. Sorry, Rob.
Okay. So I'm just going over the agenda real quick.
So today, we're going to speak about the
Changes in Media Consumption and The Student Experience.
Then we will move to
Solving the Accessibility Problem with
Artificial Intelligence Technologies.
Then we're going to speak about
Increasing Engagement: Best Practices
for Stronger Video Results.
Then you will have time for questions and answers.
So during the webinar,
if you have any question or anything at all,
you can feel free to write on the chat board in the
Q&A, the little icon that you have on your screen.
Then at the end, I'll read out
the questions and we'll have some answers here.
So Scott, I'm turning this to you.
Fantastic. Well, let's talk
a little bit about media consumption
and the student experience.
But before we look at the student experience,
I want to ask you all a question.
How many of you prefer to go to
a paper manual if you have to fix
something or learn more about an item?
Or how many of you
prefer to Google it or watch a quick video?
Have you all noticed how,
when you're purchasing something nowadays,
and I'm speaking as
a gentleman that has seen the transition here,
more and more items do
not include a manual when you purchase them?
Rather, they provide you
a website and on that website, it's typically
a combination of either text instructions
and/or a video to go to.
How did we ever survive prior to YouTube?
If you need to fix something or you need information,
have you ever said just,
"Let's look it up on YouTube"?
I know I have. I had part of
my lawnmower break and I didn't know how to fix it.
So I looked it up on YouTube, and sure enough,
there was an instructional video there
telling me how to fix my lawnmower.
This is how we take in information today.
So why would the educational environment
need to be so drastically different?
So Rob, with that,
would you share with us how learning continues to evolve?
Yeah, I'd be happy to.
You raised some interesting considerations, Scott.
The one thing that I've noticed is
certainly, you spoke to more the consumer side of video.
We've all seen the explosion of YouTube
and there's a lot of videos being published to YouTube.
They're not just instruction manuals
from the manufacturer of the lawnmower.
It might be somebody that has
your lawnmower, found a problem,
fixed it, and published a
video showing how they did it.
But the consumerization of
video has resulted in a lot of video being
created and people are very comfortable pointing
their camera away from them and
recording a video and putting it up on YouTube.
When it comes to turning that camera
around and pointing it at yourself,
it's a little bit different.
I think that that explains maybe
why some of the enterprise space and
the academic space has been on
a different timeline in terms of how
they've evolved in creating video.
That is certainly changing.
A big part of why it's changing is the
demographic of the student
or the person looking for the information.
There's that expectation that everything should be on video.
We're talking about learners that have
never known a world
where they didn't have video for everything,
and it can be a bit disruptive to them to show up in
an educational setting and not have access to
the video that they're accustomed
to through the rest of their lives.
So we'll speak a little bit about
how that's evolving, and
the creation of content in higher education is exploding.
We see universities creating 50, 60,
70,000 hours of video a year from classrooms,
not just from desktops and phones,
but actually recording of lectures and publishing.
That presents some interesting challenges
for universities for how they treat
that content and what the expectations
are for accessibility and other things.
Not just all learners,
but learners with specific needs and disabilities.
So all of that has created a bit of an opportunity.
So when we're talking
about student experiences and
taking preferences into account,
one thing that we notice is that students are
used to companies knowing an awful lot about them.
They're used to Netflix knowing a lot about
them, that Netflix model of learning.
If I could just go back one slide, Michal.
The Netflix model, it doesn't concern
students, for the most part,
that Netflix tends to know what
they're looking for before they look for it.
I think they want that experience also in education.
They don't necessarily want a search
as much as they want systems
to suggest things to them before they search.
A lot of that insight can come from
AI tools that have a lot in common with
accessibility tools and are
being used in accessibility today.
So now, next slide please.
So keeping up with that expectation,
I think, is certainly difficult,
but if you think about the way that video has disrupted
the teaching side of
how information is presented to students,
AI is disrupting how administrators
administer technologies in universities and policies.
It's raising questions about
what type of data we're
collecting and how we're using it.
It's a huge opportunity.
The one thing I'll say is AI engines used in
technologies like you all have at
Verbit are very hungry technologies.
They rely on a lot of content coming into them.
A key aspect to that that universities are realizing
is we need to automate the way this content is created.
It can't just be produced on every desktop.
It needs to come from the room
that information's being shared in.
The more we automate the create side of the video,
the more video gets created,
the more AI can learn,
and the more accurate the results can be for
the learner and the more personalized delivery can be.
Then students will start to
have those expectations met by these schools.
So it's been very interesting.
We actually did a survey with
University Business where we
polled university leaders about
what their expectations are of AI.
The first thing we noticed is
very few universities are using AI at all.
Most of them are hopeful that it's going to do
two really important things:
create that personalization of delivery
and the second thing it's going to do is contribute to
the accessibility compliance component
of what they're doing.
The more video you create,
the more that accessibility need blooms.
It becomes an interesting opportunity to help
these solutions come together and
not just help students with
disabilities, but all student learners.
It's interesting. Rob, you
talked about personalized learning.
Personalized learning has been a buzzword
for years now in education,
and trying to really provide
that student what they need
in order for them to consume the information,
in order for them to demonstrate that they
have the ability to carry out that information.
I remember back in the LMS days,
the learning management systems,
and how they tried to provide
an LMS that would be more personalized.
But in order to do that,
it became so cumbersome for the instructors to
try and develop all these various learning pathways
that it was almost self-defeating
in trying to create that personalized learning.
But I love what you're talking about in
bringing in the artificial intelligence,
and allowing that artificial intelligence to then help to
create those learning paths
for that personalized learning.
There's another article here that I wanted to point
out, a CAST study.
CAST is the Center for Applied Special Technology, C-A-S-T.
In their study, they stated that
learning is like fingerprints,
that no two brains are alike,
and the concept of
neuro-variability is really
important for universities to understand,
which gets back to some of
the statistics that you were referring to, Rob,
that it takes into account that
each learner brings a unique blend of
experiences and expectations to each learning unit.
When we look at universal design
for learning, for example,
the framework really accounts for variability
to meet different learning preferences,
not just for those with
varying abilities or disabilities.
I also like the article that Thomas Tobin has
that states that "More and
more students are time shifting,
screen shifting, and place shifting their studies,"
and using their devices to
further connect to their campus,
their professors, and to each other.
So again, the consumption,
the way that students are learning is
definitely different than when I was in college.
University faculty are therefore
charged with the task of creating and maintaining
enhanced connections with their students. Rob?
Could you share a little bit about how AI is
really impacting the academic video area?
Yeah. AI is still
a bit aspirational for most universities.
So if you're watching this
webcast and you're thinking you're
the only person that hasn't embraced AI,
I think you should take comfort in knowing
that most of your peers have not, very few have.
Many are very interested in it.
I think there's a lot of questions around
are AI engines different and how do we use them,
should we be partnering with companies that have built
AI into their technologies and then use it that way.
I think that that's a great approach.
But I think there are certainly more questions than
answers, and there's more opportunity
than people probably originally thought.
I mentioned earlier a survey that we
did with University Business about AI
and the academic video specifically.
This graphic illustrates a bit of that,
and this highlights that
I think there's a bit of uncertainty around how
to apply it and how it can be used.
But there's more consensus
on what we think we're going to get out of it,
and that is the personalization.
The beauty of it is that this webcast
is really sponsored by you.
We tend to talk a lot about
accessibility, and that
certainly is a driving factor,
but the needs of
the disabled community can have a tremendous impact,
positive impact on
all learners of all abilities, as you said.
I think these are convergent initiatives.
They're going to come together and
technology and technology providers,
like our firms, are going to play a huge role in that.
But when you see some of these responses,
I encourage you to go find the study,
we have it available on our website as
well, and read through some of the responses.
The top two desired outcomes
that people stated in the survey
are suggested playlists or suggested videos to watch,
and the second one is
moving the needle on
accessibility compliance in some way.
Those are the top two hopes for
outcomes for these types of AI technologies.
The good news is video vendors like us
are getting together with caption
providers like yourself that are using these technologies
to bring this solution a bit closer than it's ever been.
So it was very interesting,
and I hope we repeat this study again in
a couple of years. It is interesting though.
I think the volume of
video is going to drive some of this.
It just has to because the smart people today say
60 percent of all network traffic
on the Internet is video,
and in just two years, it's going to be 80 percent.
It's phenomenal. The video is certainly exploding.
It's getting created, and it's
getting consumed, and it's watched.
People will want AI engines
to make that search easier
and make those suggestions possible.
Well, this is another research study
that also demonstrates the same results.
When we look at these research studies,
they all point to the same findings: student performance improves,
content material is reinforced,
focus is maintained, and comprehension
is enhanced by having captions added to video.
Much like what you were saying, Rob.
It's not just for the deaf and hard of hearing any longer.
It really is something that we are all consuming.
If we look at how we engage with video content,
whether it's even TV,
when we walk into a restaurant,
when we are at the airport,
or when we're looking at
Facebook on our mobile phones and devices.
Did you know that 85 percent of video content on Facebook
was consumed silently, with viewers
reading the captions on the videos?
People are not listening to it.
They want the captions, or they want the captions
in addition to the voice,
so that there is multi-modal input.
So we are spending significant amounts
of time and dollars creating excellent video content.
But isn't the goal really for the learner to engage,
comprehend, and retain that content?
If we approach this from a design thinking approach,
and I'm a huge design thinking advocate,
we would start from
the learner, the humanistic perspective,
and first of all,
identify how they are going to consume
the content even before
we design and create the content.
That way, we're able to
provide that content in a way that
the students are first and foremost going to take it in.
Rob, can you share a little bit with us about
how the approach is at Mediasite?
Yeah, sure. I think
the important piece to remember is that
there's a thoughtful approach to how you manage
the life cycle of a video to
improve the odds that
an AI type of technology is actually going to help you.
It's important to think about
people's perspectives because many people
have that consumer video perspective,
like the Facebook video that you
mentioned that most people watch on silent.
I don't always turn captions
on in a video that I'm watching, but if they're on,
I almost never turn them off,
because I actually
read faster than I listen, which seems strange.
But people have said the same thing;
it's a tool that, when it's there,
helps me a lot.
I actually find myself wishing
that the video moved a little faster when
the captions are on because I'm
comprehending quicker what's actually being said.
So I think I'm not alone in that.
I wouldn't think that that would be the case,
but how you manage
the life cycle of a video can have
a huge impact on
the quality of that experience
that you're delivering to the user,
whether it's a playlist,
a video and how that's being organized,
whether it's how quickly the video was made available.
So in our company, when
we look at the life cycle of video,
we look at all the things that you see
on your screen, starting from capture.
We've always had a focus
on not leaving capture to chance
because you only get one chance to record a video
and how you record that
video plays into some
of the things that we're talking about.
Not every video is
going to come out of the other end looking the
same depending on considerations
that were taken for capture.
So we take great pains to make sure
that we can automate the capture,
but that we automate quality capture
that's high enough quality,
particularly on the audio to improve that result.
Then put it through a transformation process where there
can be data enrichment across the board.
Not just accessibility and
speech to text, but data enrichment
because when you have
that text file coming from the audio file,
you can organize the content better, you can deliver it,
it's more dynamic, you can have
the captions on, certainly.
But all of the other things you can do with the data
that's derived through this transformation process
creates an engagement layer with your students that
allows you to analyze and
predict the outcomes of what is
the video initiative doing for
the quality of the learning experience.
Are outcomes improving?
Do we have better relationships
between professor and student?
Do we have better relationships
between student and student,
or student and content,
professor and content?
So all of these different engagement pieces
can be improved throughout this process.
So we look at it all
and make sure that in this video platform,
one of the key elements in
that transformation box is
what I would call data enrichment,
which is that captioning and speech to
text, because that text
could also come from an OCR scan of content
that's sitting on a slide as an image.
So there's a lot that can be done there.
Yeah. I really like what you said
there about increasing engagement.
Could you share a little more with us
about what you've experienced in the industry and
some of the best practices to
increasing the results of engagement with video?
Yeah. Well, the first advice that I
give to people is just start recording, just hit record.
You can wait for a lot of things to be perfect:
perfect lighting, perfect automation, lots of things,
but I think at the end of the day,
if you, again, go back to
personas and the expectation of the viewer,
they're used to watching pretty bad videos
on YouTube every day.
So the expectation that the lighting is going to be
perfect in a classroom is pretty low.
I think they would rather have the content.
Where I think people
need to take more caution
with is the quality of the audio.
Do we need ambient microphones
capturing every single thing
happening in the room and does
that have a negative impact on
the ability to create a searchable,
organizable, compliant object at the end?
All of those factors come into consideration.
More microphones may not be as good as fewer microphones.
The types of microphones are interesting.
So I say just hit record
because most of the stuff you have to learn by doing it,
and you can get advice,
but until you start recording and seeing
what content looks like coming out of
your rooms, talk to your students,
see what they think of the content,
how quickly it's become available,
and if you can't afford
cameras in every room, that's okay.
Record the content and the audio.
It's just as important.
You can integrate the video of the speaker later if
that's a budget or
a technology constraint
that you have in the environments that you're in.
But certainly, start recording content,
I would say, is my first advice,
and then you can focus on the quality.
You'll have experience, you'll
understand what works and what doesn't work,
you can learn from your peers.
One thing I've learned in working
in higher education technology for
14 years is that
higher education shares knowledge
across the board with their peers
better than any space I've ever worked in.
There is so much expertise out there in these
communities of people that have gone before you.
If you're not creating a lot of video and
you want to, talk to your peers.
Odds are you know somebody that is.
Then I think personalization.
This is more making sure that your video initiative and
your policy around video
matches the expectations of the student
and that personalization of experience is there.
Underlying all of this, I think, is AI.
You had some really good comments
about not just implementing AI,
but helping AI learn along the way is an important piece.
Exactly. Yeah. Helping that artificial intelligence
to learn by providing the artificial intelligence
the content that it needs in order
to understand what is being captioned, for example.
In our world,
we developed our own
automatic speech recognition engine within Verbit,
because we couldn't find one
commercially provided that was accurate enough.
We coupled it with artificial intelligence,
so that our artificial intelligence is
feeding into our speech recognition,
and it's becoming smarter and smarter all the time.
So it's that content that's being fed in,
whether it's a glossary of terms,
a syllabus,
or digital content that's being fed into
the artificial intelligence so that as
the automatic speech recognition
is going through its process,
it's already gained intelligence
prior to the task at hand.
So it's about knowing how to feed that
artificial intelligence so that you're able
to optimize the best results.
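To make that idea concrete, here's a toy sketch of vocabulary biasing. Everything in it, the function name, the scores, and the glossary, is invented for illustration; this is not Verbit's actual engine. You can think of feeding in a syllabus as rescoring a recognizer's candidate transcripts so that hypotheses containing known course terms win out:

```python
# Toy illustration of glossary-based vocabulary biasing for ASR.
# Invented for this example; not Verbit's actual engine.

def rescore_with_glossary(hypotheses, glossary, boost=0.1):
    """Pick the best transcript after boosting glossary hits.

    hypotheses: list of (score, transcript) pairs, higher score is better.
    glossary:   set of domain terms, e.g. pulled from a syllabus.
    boost:      score added for each glossary term found in a transcript.
    """
    def biased_score(item):
        score, text = item
        words = set(text.lower().split())
        return score + boost * len(words & glossary)
    return max(hypotheses, key=biased_score)[1]

# The engine slightly prefers the mishearing, but the biology
# glossary tips the choice toward the domain term.
glossary = {"mitosis", "meiosis", "cytokinesis"}
candidates = [
    (0.52, "my toes is the first phase"),
    (0.50, "mitosis is the first phase"),
]
print(rescore_with_glossary(candidates, glossary))
# prints: mitosis is the first phase
```

Real engines apply this kind of bias inside the decoder rather than as a post-hoc step, but the effect is the same: the more context you feed in ahead of time, the better the output.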
Again, Rob, like you said,
sound is so critical.
If I'm sitting there listening to a video
and there's a lot of static or there
is a lot of background noise and I'm having
a real hard time discerning what the speaker is saying,
I'll click out because it's frustrating.
But yet, I can put up with
less quality video, but not sound.
Sound is something that I
expect to be able to hear clearly,
and when we start looking at technology,
automatic speech recognition engines and
artificial intelligence, they need
that good quality sound to be
able to really discern what's being said.
Yeah, and I think training is key. There are
positive things that
universities can do to improve
the odds that the outcome's going to be good,
and training word lists
that are contextual to medical content,
or legal content, or
whatever the type of source content is,
makes a big difference. It's something to think
about, and I think providers
like yourself understand how to apply
those customizations to help the technology and
help the universities much
faster than they otherwise would.
Mediasite originated in
the late '90s as a phonetic search indexing tool.
Before we had lecture capture and all of
the life-cycle management stuff that I talked about,
we were a phonetic indexing tool that
could listen to phonemes and make them searchable.
What we learned 20 years ago was that
if you don't have a large enough repository
of things to search against,
the odds of you returning
a null result or
a poor search result are actually pretty high.
When you think about AI engines and
the appetite that those things have to learn,
they're doing what humans do.
They're just doing it faster.
So if you're creating 3,000
videos a year today and you start thinking about AI,
that AI engine's going to want 30,000 videos,
or 50, or 60,
or 80,000 videos to
take and to make the
most of the potential that can come from that,
and that obviously introduces
a challenge with accessibility compliance mandates
like those coming in the EU,
but a mixture of
technology and premium services from providers
can help get a long way toward
achieving that level of compliance
and benefiting everybody in the process.
Exactly. I also think about accents,
how various professors will
have various accents because we're
a very global, international educational system
and we have professors from all over the world,
and how artificial intelligence can get
to the point where it learns the accent in
order to more accurately
capture what's being said.
Whereas an individual, a human being,
oftentimes, if you're changing from human to human,
just doesn't have the capacity
to make that kind of comprehension
or that transfer of information from human to
human, where artificial intelligence has that ability.
It aids us in being able to
truly deliver more quality education.
One of the things you just
mentioned that reminded me,
we've done a lot of work with universities on how to
optimize the audio in a room to
improve an ASR result, for example.
One of the elements in there that I think I
underestimated wasn't just the technology around the audio,
but partnering with faculty in the room.
Introducing the idea of making
subtle changes to the way they speak or how they
deliver the content can save a university a lot of money:
the cadence of how they speak, maybe
overcoming some of the language obstacles,
and things like that. Just how
the content or the material is being
presented into that audio source
can make a big difference.
That is an element of how we can make all of this better.
Well, that sounds like another great webinar
for us to offer here in the near future.
There's a lot of study,
a lot of data being collected,
a lot of analysis being done on decisions
you can make at the source of content and what that does.
That's a partnership, not just
with academic technologists and us;
it includes faculty, and those professors are
willing participants in making sure that
the content serves the needs of everybody.
So one thing that Rob just
touched on about partnering with faculty,
you're attending this webinar and your role might be
in a disability department,
or your role might be more tech-centered,
or you might be in the media department.
Really, video is cross-departmental.
Video is being used in
all departments throughout the institution,
from IT
over to communications or public relations.
All departments are using this.
Rob and I would like to encourage you that
if you're here from your institution
and you're seeing it from
one departmental or one perspective,
we would encourage you to
share this video, share this webinar,
share this information with others
throughout your institution that might be taking
a look at how content is being
consumed from various other perspectives.
Yeah. It used to be that
a video technology or
a video initiative at
a university was run by academic technologists,
and the accessibility team was
at the table primarily as a compliance piece.
I think if you're in that disability department,
partnering early on to look at it
not just from the vantage point
of the disabled community,
but at what the disabled community actually has in common
with the broader community, will
create a much more cohesive
and well-delivered strategy for sure.
These are things where
you're inviting people to the room,
to the table to give their input much earlier,
and partner them with the solution
versus reacting to a problem.
Depending on which perspective you
have with what's on the slide right now,
seek those people out early, sooner than later.
It doesn't have to be
an enforcement-like conversation;
I think the common interests between all of
these departments are there.
It's interesting. I know there has been many times in
my academic career that I've said,
"Boy, I really wish that they would have
just asked me before they
delivered it or asked me before they finalized it
so that we could have really built it into
the design stage," and then to find out that
it's really a lot cheaper to build it
into the design stage than it is
to try and to be reactionary
and to fix it after the fact.
It always looks like an add-on
when we have gone in and tried to fix it after the fact.
So when we can really build this
into the overall design and the protocol,
the processes within your institution
and your workload, it doesn't become
a crisis moment of having to
go back and try to react to it.
But rather, we're able to be
proactive and really look forward as to how
we're going to plan and make sure that we
are providing the best that we can for our students.
Absolutely. We're seeing this now, which is a great sign.
The trend is in this direction for sure,
and also bringing together
the financial side of that topic,
the accessibility budgets together with
the technology budgets and trying to figure out how
they can serve a similar purpose. That's happening.
How are the questions coming in?
Yeah. One question to start.
So there is video and there's video.
How would AI distinguish
between good quality video and poor quality video?
Well, it'll see it in the results,
and I think this is why those departments should be getting
together and talking about
how to solve these problems together,
because certainly, we've obsessed about
the capture side when it
comes to speaking about lecture capture in general.
We, as a company at Mediasite,
we've stressed the importance of capture because
that initial opportunity you get,
that only opportunity you get
to create that file will ripple down.
Those decisions ripple through the system and
the outcomes are determined by that first choice.
It does have a high dependency on good quality audio.
The question is, I think, in some minds,
from my perspective and
the conversations that I have with customers,
they're pretty narrowly focused around,
"We want to record a lot of video.
We're worried about compliance.
How do we use technology to solve some
of the economic side of the compliance issue?"
Like is there an ASR engine
out there that is better than others?
Much like the question that was just asked,
it depends because the same ASR engine
that you try with one piece of content could be
totally different with another one
because the quality changes,
and to normalize that quality,
to understand who's better
or what service you get, I think,
this is why the ASR technology
still needs a human element to be fully compliant.
From the disabled community,
and I have a daughter who has a disability,
her disability is vision, not hearing,
but I know from being a part of
that disabled community that
their expectations are very high.
If you're a person that's deaf or hard of hearing and you
read an ASR caption that's even 80 percent correct,
it's surprising how bad 80 percent really looks
when you see it on the screen.
So I think there's always going to be a mix of
people and technology doing
this, even if the content is good.
It's a work in progress,
but there's no magic
solution out there that can make all content
have perfect speech to text results and
create the perfect personalized
experience for all your learners.
It's going to be a tweaking exercise where
you're fine tuning your system
between quality and the solutions
that you're using to deliver it.
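As a concrete aside on that 80 percent figure: caption accuracy is often reported as word error rate (WER), the word-level edit distance divided by the length of the reference transcript. WER is a standard metric; the code below is an illustrative sketch with made-up example sentences, not anything from either company's product:

```python
# Word error rate (WER): word-level edit distance / reference word count.
# Standard ASR accuracy metric, sketched here for illustration.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "the lecture covers mitosis and meiosis in chapter four today"
hyp = "the lecture covers my toes and meiosis in chapter four today"
print(wer(ref, hyp))  # prints: 0.2
```

Even at 20 percent WER, one word in five is wrong, which is why an "80 percent correct" caption looks as rough on screen as Rob describes.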
But the good news is
technology companies and service companies
like us and [inaudible] and others are getting together
and trying to figure out how we can bring together
solutions that work because there's
a lot of ways to create a speech to text file.
They're not all useful.
Eventually, they could be so
inaccurate that they really
become useful only for search,
and even then, they may not
be totally accurate unless, as Scott said,
you've provided some contextual training
or word lists or things to optimize the search.
It's a great question
and it still depends.
What are your top 3-4 tips for faculty in
creating and using video in their courses?
I'm sorry, the top three tips?
Yeah. What are the top 3-4 tips for faculty in
creating and using video in their courses?
Yeah. I think the first thing I would say is,
back to the little list of
recommendations we had, is just start recording.
Just hit record, and honestly,
don't overthink it until
you get some experience under your belt.
Advice to professors specifically,
to instructors at a course,
the good news is video has gotten
way more casual, way less formal.
You used to spend
15 minutes worrying about how clean your office was,
and is the lighting right,
and now, we've moved on now to
where the timeliness of
the content is more important than
the production quality of the content,
except when it comes to audio
because Scott is absolutely right.
A grainy video,
somebody could watch that video from start to finish.
But if there's a tick in the audio,
or there's an artifact of some kind in the audio,
it's such an annoyance to the listener
that they're way more likely to shut the video off
and not finish it because of
the audio than because of the video.
So I would say find a quiet place,
don't worry about the video,
and just make sure that your information is timely,
and make sure that if you have
an opportunity to do it through
video instead of text, you should try it.
Let the video system create the text for you,
and add to it that way.
There's no system that's going to
create a video from your text,
but you can go the other way.
So I think, dive in
and get started is
probably the best advice that I can give.
Rob, I really like the fact that
you pointed out that we're less
formal in our video consumption nowadays.
If this is an online environment,
for example, we're looking for a way to connect.
If we look at how social media has
provided a way to connect globally,
from individual to individual,
the same thing happens in education.
Students are wanting to connect with their instructor,
and the video allows them to
make that connection by personalizing it,
by having it be less formal,
have it be some opportunities
for the conversational type of exchange to take place.
So, like Rob said,
don't feel like you have to have a planned script
and a formal presentation in order for it to be recorded.
It doesn't always have to be live.
I think the vast majority
of the video that we do is on-demand.
There's time and places for live.
Capturing a lecture in a classroom, I believe,
is the first step to getting comfortable with video,
and the reason I say that that's a good first step is
because if I'm comfortable at the front of the classroom,
and this is my sanctuary,
and the door is closed, and I'm in my element,
whether that camera in the back of the room is on or
off, it doesn't really matter.
But if I'm sitting at my desk in front of
a web cam, like I am now, it's a little weird,
and I'm not a broadcaster,
and this isn't my element.
I would much rather be looking at
all the participants in a room right now if I could.
I'm more comfortable with it now because I do it a lot,
but that's where I think automating what you deliver,
just recording what you already
stand and deliver today is a good first step.
Then the other advice I would give
is watch your own videos. Watch them.
Don't just assign them to
the students and let them watch. Watch your own
because if you have a fixed camera in
a classroom and you keep walking
off the camera because you pace,
certainly, you could put tape on the floor.
Or I almost guarantee if you watch the video,
you're going to have an awareness
about things that you can do
that change how you
present yourself on the video because it's different.
Certainly, the outcome of
the recording's a little bit different.
One other simple tip that I used to always
recommend when I started doing
webinars and recording is
to get a face of somebody that I want to talk to,
and make a copy of that face,
cut it out, put it on a stick,
and actually put it behind my camera
so that I could at least look at
a face when I'm talking,
rather than just a green dot
that indicates that the camera is on.
So I would engage with that face,
and talk to that face, even though that face is paper.
But still, it would provide
that opportunity to see a smile that's looking at me.
I would also say,
I know we have other questions coming in,
I would say resist the temptation to
use everything that technology lets you do,
like questions and answers.
We're a technology company, so we're guilty
of complicating how much you can do.
I would say just start simple, do a video,
just the fact that you recorded
a video is hugely engaging.
Whether or not you have Q&A or polls
is something you can do down the road.
But you can turn a video
into something that is overly complicated too.
Just because you can doesn't mean you should, I guess,
is my advice there. Keep it simple.
Next question. What are
some suggestions for making class video more interactive?
We have faculty who are making plenty of videos,
but they are often not being watched
once students have access to them on Canvas.
The first thing I'm going to say
is to make it searchable.
Yeah. None of us want to be in a class for,
say, 50 minutes, and then go
watch a video for 50 minutes.
We want to be able to go to that video and
identify just those segments that we want to re-watch,
or the information that we want to be able to
then pull from that video to later look up,
or to find out what
the assignment was that I'm responsible for doing.
So making it searchable
would be the first thing that I'd recommend.
Yeah. That's a great point.
I think the other thing I would suggest is to ask
your students what about
the videos are interesting to them, and what are not.
Yeah, exactly. [inaudible] is suggesting
to use polls, and surveys,
and videos to engage viewers,
which I think is a nice idea as well.
Yeah. It's all of those things, I think,
help create engagement between viewers,
which is also good, especially if
your viewers are not in the same classroom.
The other thing I would suggest is, besides
asking the students what they think, is
look at how quickly the videos are available.
A lot of times, students want to
re-watch something the minute
they walk out of the classroom.
They want to sit down on a bench and pull up the lecture
they just walked out of and
hone in on something that they think they missed.
So how quickly that content
becomes available can impact viewership.
We find most lectures are viewed within 48 hours of
the actual lecture being
held and that's an important piece.
The other thing I would say
is the idea of
the flipped classroom really
came about because of that issue:
if the lecture is going to be recorded anyway,
and it's the same thing, is it less valuable
than if something is pre-recorded,
so that the classroom itself
is a bit more engaging because
the content was delivered in advance
of the class, not at the class?
So you don't have to flip
something completely upside down
and do your class that way,
but you could flip certain pieces of it.
I've heard of an accounting professor that will record
four- or five-minute pre-lecture recordings before
the actual lecture to create
some familiarity to drive
the engagement even in the class,
not just with the video.
Another question. Do you recommend
face-to-face instructor recording and
posting in LMS for all my section students?
I think so.
I think the LMS is a great place.
I think it's difficult to compete with
where students' eyeballs are going on any given day.
If the LMS is kind of that,
it depends on the university.
Some universities, the LMS
is not the center of the universe.
But I would say
wherever it is that
you expect the students to be
spending their time, put the videos there.
If that's your LMS, that's where they should
go and then give them the ability to search,
not just the videos that are embedded in there,
but everything else that
they might be looking for. That way,
some of the results they return could be
documents and some could be videos,
some could be lectures,
some could be third-party videos.
Maybe a YouTube video
that was used as a classroom supplement
also appears in there as a link.
I think all of those things are designed
to not create confusion for the student
about where they're supposed to go to actually
see the video. If you
create a video platform for
the video and an LMS for all the text,
then the student has to know
which one they're supposed to be in.
I think that that creates confusion and
that'll reduce your viewership.
In an online course,
do you recommend weekly video content?
Is that once, twice per week?
Well, I always think more is better
because, I think, if you think about
how many times a professor has
a conversation with a student,
there's opportunities to create video that aren't
just lectures or delivery of academic content.
I've seen professors live
stream office hours because they were tired of
having 50 students standing
outside their office asking the same question.
So they said,
"Let's do office hours in a lecture hall and stream it
live and then when that first student asks
the question that 25 other students
have, they all hear it once."
So there's lots of ways to create touch points,
I guess, throughout the week that vary
depending on what the type of communication is.
It doesn't have to be
a lecture and I'm not smart enough to
advise any professor on how often they
should deliver content to their students.
I think students will
probably tell you they'd rather have
more three-minute videos than
one 60-minute video outside of the classroom.
I think when it comes to lectures,
I think they just want the whole lecture,
not because they want to watch the whole lecture,
but because they can search
within it to find what they're looking for.
But outside of the lecture,
I think shorter is
generally considered better.
Even if it's a longer topic,
break it up into
smaller, consumable videos that
they can get through quickly.
We tend to see those types of
videos get the greatest viewership and
my assumption is if the viewership's high,
then there's value.
How about a gold nugget of information
every day in a three-minute video that goes out to
your students, that gets them
engaged with that thought process and keeps them
engaged, so that it's not
just the classroom environment where they're studying?
The frequency really is going to depend on the content.
Like Rob said, for short video content
that keeps students thinking, processing, engaging,
and applying, all of those
pedagogical approaches to learning, video is
an excellent way to do that.
Yeah, and I think
probably because compliance with
accessibility is a big deal,
it's always a concern of universities:
should we just do less video if we
can't afford, or don't have, a solution for compliance?
I would say, tackle that conversation head on:
talk to your accessibility vendor,
talk to your video vendor,
and try to figure out a solution that affords
more people access to
video, including the disabled community,
while meeting the needs of both.
I think that that's
a multidimensional conversation that
has to happen and if you're
thinking that doing less with
video is a positive step toward compliance,
I think nobody wants that.
Everybody wants more video,
everybody wants more compliance,
everybody wants engagement with
the disabled community and
with learners that have different ways of learning.
There's no two people that are exactly the same.
So the amount of video that you
record can only help if it's greater.
Yeah, and then finding ways to make
sure that it gets viewed is,
I always say, "Ask the students."
Find out what they think.
It's amazing what they'll tell you if you ask them and
you can learn a lot for sure.
Thank you very much, guys.
I appreciate that a lot.
Rob, thank you for your time. Also Scott.
Thank you all the attendees
of the webinar.
We will send out the recording very shortly for you to watch,
and search, and learn more,
and share with other people.
Thank you so much again for attending.
Have a great rest of the day. Thank you, everybody.
Thank you for hosting. Thanks.