
The Difference Between Open and Closed Captions & How it Impacts Students


Today’s universities and online education organizations strive for inclusivity and accessibility in order to spread knowledge and create career opportunities for all, including students who are deaf or hard of hearing and those who aren’t native speakers of the class’s language of instruction. As distractions increase and information is conveyed faster than ever, adding captions to your content is an easy way to scale the positive impact your organization makes on students.

Today, let’s talk about making the most of captions. Specifically, let’s explore what closed captioning is, what open captioning is, the difference between the two, and what each of them can contribute to your organization.

 

What is Closed Captioning?

The “closed” in “closed captioning” means that the captions are disabled by default. Users must activate the captions in order to view them, usually via remote control or by clicking a button. On YouTube, for example, the button reads “CC,” which stands for “closed captions.” Just as users can activate the captions, they can easily deactivate them by clicking the same button again. To dig deeper into closed captioning, let’s take a look at some examples and review what organizations must take into account when they choose to provide viewers with this option.

 

Closed Captioning Examples

There are different types of closed captioning. Closed captions might be offered on TV, available for people to activate as they wish. Whether someone is new to a language and needs some support understanding a movie, or their kids are asleep and they still want to catch up on a favorite show, closed captioning is helpful in a variety of scenarios.

Similarly, e-learning platforms might want to provide captions to improve learning, but leave it optional so students can choose for themselves whether to use the captions or not, depending on their learning style or their particular situation. Got an hour on the train, but no headphones? Instead of wasting time, catch up on an online course with closed captions, then turn them off if they distract you at home.

Sometimes closed captions serve other purposes. On YouTube, for example, they might be used for SEO (search engine optimization). While viewers of a specific video might not need the captions, Google and other search engines “read” them to gain a better understanding of how to rank the video, making it easier for more viewers to find it.

 

Pros & Cons of Closed Captioning

As you can see, there are many pros to closed captioning, especially the fact that it’s there for those who need it but doesn’t intrude on the user experience of those who prefer to go without it. The ability to turn captions on and off as needed provides the flexibility that empowers a customized, personalized experience.

Flexibility isn’t only for end users. Closed captions are delivered alongside the video as an additional file, so if an error is found in the captions but the video is fine, it’s much easier to fix. The mistake can be edited using closed captioning software or by a closed captioning service provider, without removing the video from the platform it’s been published on. That said, there are different types of closed captioning files, and not every device and media player supports them all. Even when they do, the captions’ fonts are determined by the player, meaning you cannot guarantee that they will be readable.
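
To make the idea of an “additional file” concrete, here is a minimal sketch of a sidecar caption file in the widely used SRT format (WebVTT is another common type). The file name and cue text below are made up for illustration, and the small Python snippet simply writes the file to disk.

# A minimal sketch of a sidecar caption file in the common SRT format.
# The file name and cue text are hypothetical; in practice, caption files
# are usually produced by captioning software or a service provider.
srt_cues = """1
00:00:01,000 --> 00:00:04,000
Welcome to today's lecture on accessibility.

2
00:00:04,500 --> 00:00:08,000
Let's look at the difference between open and closed captions.
"""

with open("lecture.srt", "w", encoding="utf-8") as f:
    f.write(srt_cues)

Because the cues live outside the video itself, fixing a typo means editing and re-uploading only this small text file, not the video.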

Another significant challenge is that people who are deaf or hard of hearing may feel excluded, or even miss important information, if captions are not activated right away or if the font is not readable. And they’re not the only ones: combining audio and text increases the learning impact for many people, even those who hear well. If a student isn’t used to using captions, it might not even occur to them to activate them, and they may miss out on that benefit entirely.

 

What is Open Captioning?

Open captions are burned into the video, meaning they’re part of the video file just like the audio and the visuals, and viewers automatically see them. There is no way for viewers to turn them on or off.
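
For organizations that render their own video, burning in captions is typically a one-time processing step. The sketch below is one hedged example of what that step can look like, assuming the free ffmpeg tool is installed and a finished SRT caption file already exists; the file names are placeholders, not a prescribed workflow.

# A minimal sketch of burning ("hard-coding") captions into a video,
# assuming ffmpeg is installed and lecture.mp4 / lecture.srt exist.
# The subtitles filter re-renders every frame with the caption text,
# so viewers can no longer turn the captions off.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "lecture.mp4",             # original video
        "-vf", "subtitles=lecture.srt",  # draw captions onto the frames
        "-c:a", "copy",                  # keep the audio track unchanged
        "lecture_open_captions.mp4",     # output with captions burned in
    ],
    check=True,
)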

 

Open Captioning Examples

The videos featured on Facebook are a prime example of open captioning. The videos are automatically muted, so the only way to fully understand the content without turning on the volume is by reading the captions. Similarly, if moviegoers watch a film in a foreign language and captions simply appear on their own on the TV or theater screen, that’s open captioning. If a video is set to be screened in front of an audience that is deaf or hard of hearing, it is best to use open captioning to ensure everyone in the audience gets the most out of the content.

 

Pros & Cons of Open Captioning

It’s easy to see the difference between open and closed captions because the pros and cons are almost complete opposites. Primarily, choosing open captioning helps people who are deaf or hard of hearing feel equal. It is much more inclusive and beneficial to think about their needs in advance, instead of leaving them to deal with potential technical challenges at the beginning of the video, which might make them feel marginalized and less valued by the course or university. That feeling can be worsened if the font isn’t very readable.

With open captioning, universities don’t need to worry about different types of closed captioning files or whether they will work. Open captions are part of the video file, and universities have control over how the captions appear ahead of time. Professionals in the organization can review them and ensure everything works. Students don’t need to worry about it either. While activating closed captions might seem simple to those of us who work with technology on a regular basis, if students are not tech-savvy, universities need to work extra hard to ensure that turning captions on isn’t an effort for them.

It’s all about knowing your audience. If the majority of students don’t need captions, and the captions end up hiding important visuals, or some students get easily distracted by the additional on-screen text, open captions might end up interrupting the user experience. In this case, it might be best to offer closed captions and work with closed captioning software or a service provider with proven experience, to ensure that those who need captions have a seamless experience that doesn’t interrupt those who prefer uncaptioned content.

 

What Open and Closed Captions Have in Common

As described above, there are significant differences between open and closed captions. At the core, what matters is that the captioning solution universities and online education platforms choose helps students reach their learning goals quickly, in a way that’s cost-effective for the organization. Although there are multiple options on the market, organizations often must compromise on something, whether speed, cost or accuracy.

Luckily, this is not always the case. At the risk of a shameless plug, Verbit provides a combination of open and closed captioning software, powered by artificial and human intelligence. Two professional transcribers review every single automated caption created by the software. The feedback from the human transcribers is embedded into the system and fed back to the core speech-to-text engine via adaptive algorithms, allowing it to continuously improve. The more an organization uses it, the more successfully it adapts to the organization’s needs, the faster students get accurate captions, and the more cost-effective the whole process becomes.

The bottom line? Students become happier and more engaged, as they keep reaching their learning goals. This means they are far more likely to become brand advocates for the university or education platform, spread the message of its great service, and help the organization grow.

 


How Long it Really Takes to Transcribe (Accurate) Audio

We’ve all been there. You strategize an impressive game plan, complete with a detailed schedule, and then life happens. A coworker gets sick and you need to pick up extra work, a client or student needs more of your time, or your transcriber tells you that it’s not possible to get your backlog of video and audio files transcribed as quickly as you thought. Sometimes that’s OK. Sometimes your organization or department has time for surprises. It has time to wait.

But on other occasions, there is an e-course platform launch coming up, or a growing number of students are griping about not getting their transcripts or captions in time for finals that are just around the corner. At times, this can be critical, and an organization could end up with hefty fines or even lawsuits due to non-compliance. To avoid these scenarios, it’s important to figure out how to get results fast. Specifically, it’s important to understand realistic transcription turnaround times, which options on the market tend to deliver faster average transcription times, and how to make this happen without compromising on transcription quality. Today, we’re answering these questions to help make life easier for education organizations.

 

How Long Does it Take to Transcribe Audio with the Available Options in the Market?

How long transcription takes depends on a wide range of variables, and each university or education platform needs to choose the best fit for its specific needs. To make the choice easier, the following is an overview of common options on the market.

 

How Long Does it Take for Individual Transcribers to Transcribe Audio?

When it comes to individual transcribers, the average time to transcribe one hour of audio is approximately four hours. But some transcribers quote four hours as the minimum, since it can easily reach 10 hours. Transcription time per audio hour varies so much mostly because each audio file is different. If the audio quality is great, there is only one speaker, the speaker’s accent is familiar to the transcriber, and there are no new professional terms or obscure book titles to research, the work can go smoothly and quickly.

However, it becomes more challenging to gauge how long transcription takes if the audio quality is low or if there are background noises. Similarly, if there are multiple speakers and some of them don’t speak clearly, if they have similar voices that make it more challenging to understand who’s speaking, or if they have heavy accents, these factors, too, can make things more complex. Even if the audio is great but the topic is highly technical in nature, the transcriber will need to do some research, and it could take longer to complete the transcription.

Whether the transcription itself takes four or 10 hours, turnaround time, meaning when organizations actually receive the transcription, depends on additional factors. Freelance transcribers need to review and proofread their own work. Sometimes that means listening to the audio again, making corrections and editing. Since it is their own work, and they know the organization’s team, they are often very invested in delivering the highest quality transcription possible. But all of this takes time. It is therefore possible that the transcription itself will take four hours, but the turnaround time will be 48 hours. Of course, an individual transcriber may already be booked for days or weeks ahead of time, extending the turnaround time even further. If they’re solo business owners, they likely juggle many responsibilities in their professional and personal lives and aren’t always able to be instantly available for an urgent transcription project.
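
As a rough back-of-the-envelope illustration of how these numbers add up, the sketch below turns the ratios discussed above (roughly four to ten hours of work per hour of audio) into a turnaround estimate. The proofreading and queue figures are assumptions chosen purely for illustration, not quotes from any transcriber or provider.

# Back-of-the-envelope turnaround estimate for an individual transcriber.
# The 4x-10x work ratios come from the discussion above; the proofreading
# time and queue delay are illustrative assumptions only.
def estimate_turnaround_hours(
    audio_hours: float,
    work_ratio: float = 4.0,          # transcription hours per audio hour
    review_ratio: float = 1.0,        # proofreading hours per audio hour (assumed)
    queue_delay_hours: float = 24.0,  # wait in the transcriber's backlog (assumed)
) -> float:
    return audio_hours * (work_ratio + review_ratio) + queue_delay_hours

print(estimate_turnaround_hours(1.0))                   # 1 * (4 + 1) + 24 = 29 hours
print(estimate_turnaround_hours(1.0, work_ratio=10.0))  # 1 * (10 + 1) + 24 = 35 hours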

 

How Long Does it Take for Transcription Companies to Transcribe Audio?

Transcription time per audio hour isn’t necessarily different between individual transcribers and transcription companies. The transcription work itself needs to get done by a human being either way. That said, a transcription company, often being a larger business than that of a freelance transcriber, might have the budget for better tools, which might help reduce how long the process takes. In addition, since a transcription company often has a team, there’s more potential for quick availability and faster transcription turnaround time, as team members can review each other’s work and back each other up if someone gets sick or has an emergency.

That said, accuracy guarantees and turnaround times vary drastically between companies. The variation can be anything from one business day with accuracy not guaranteed, to six business days if the audio file is three hours or longer.

 

How Long Does it Take for Transcription Software to Transcribe Audio?

The average time to transcribe one hour of audio definitely decreases when transcribers use high-quality transcription software. But here too, we’ve heard of transcription turnaround times of four business days, so you’ll need to check with your providers on an individual basis to know what to expect.

 

Consider doing some research on the best tools and ensuring your organization’s provider offers them. We’ve found that if transcription companies use automatic speech recognition (ASR) and artificial intelligence (AI), they can cut the turnaround time to one business day or less, even if they have two human beings review what the software has automatically transcribed. In addition, some transcription software products offer transcription in real time, where the transcription time per audio hour effectively disappears because organizations get the transcription immediately and automatically.
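
To picture the ASR-first workflow, here is a minimal sketch of the general pattern, using the open-source Whisper model purely as a stand-in for whatever speech engine a given provider actually runs. The human-review step is a placeholder function and the audio file name is hypothetical; this illustrates the workflow, not any vendor’s pipeline.

# A minimal sketch of an ASR-first pipeline: the machine produces a draft
# quickly, and human reviewers correct it afterwards. Whisper is used only
# as a stand-in ASR engine (pip install openai-whisper); the review step is
# a placeholder, not any vendor's actual process.
import whisper

def machine_draft(audio_path: str) -> str:
    model = whisper.load_model("base")     # small general-purpose model
    result = model.transcribe(audio_path)  # returns a dict with a "text" field
    return result["text"]

def human_review(draft: str) -> str:
    # Placeholder: in practice, trained transcribers read the draft against
    # the audio and fix names, terms and punctuation.
    return draft

final_transcript = human_review(machine_draft("lecture.mp3"))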

 

Does Average Transcription Turnaround Time Impact Accuracy?

There is something to be said for taking the time to ensure accuracy. If transcription companies try to reduce transcription turnaround time without setting their team up for success, chances are quality will be compromised. As we said, if the audio quality is high, there are few speakers with distinctly different voices, there are no tech issues to overcome, and there are no obscure industry terms to look up, a great transcriber equipped with efficient transcription tools will likely be able to produce a high-quality transcription fast.

The key is to find a provider that combines high-quality human intelligence with high-quality artificial intelligence. When transcription software is empowered with AI, it can learn industry-specific terms, obscure book titles, professional names and current events, among other things, and human transcribers then correct any mistakes that remain. This process provides organizations with accurate transcriptions faster, and the software gets smarter every time it’s used, as it learns from any mistakes and corrects them in future uses. When organizations combine smart technology with smart human beings, not only is the average transcription time significantly reduced, but the final result is as accurate as if a seasoned professional had dedicated days to producing it.
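
One simplified way to picture the “gets smarter every time” idea is a stored list of corrections that is applied to future machine drafts automatically. The sketch below is a deliberately naive illustration of that feedback loop, with made-up terms; real systems adapt the speech model itself rather than doing simple text substitution.

# A deliberately naive illustration of the feedback loop: corrections made
# by human reviewers are stored and applied to future machine drafts.
# The terms are made up, and real systems adapt the speech model itself;
# this substitution only shows the direction of the data flow.
learned_corrections = {
    "mitosys": "mitosis",         # hypothetical term the engine once misheard
    "doctor smyth": "Dr. Smith",  # hypothetical speaker name
}

def apply_learned_corrections(draft: str) -> str:
    for wrong, right in learned_corrections.items():
        draft = draft.replace(wrong, right)
    return draft

print(apply_learned_corrections("As doctor smyth explained, mitosys is how the cell divides."))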

 
