The Role of Transcription and Captioning in Blended Learning

Higher education today is not what it used to be. Once upon a time, taking copious notes as the professor droned on and memorizing textbook passages were synonymous with the college learning experience. However, over the past two decades, online education and multimedia content have supplanted outdated methods, becoming mainstays of the educational landscape.

Thanks to the surge of digital resources and online learning, higher education has undergone a definite sea change, resulting in new and enhanced learning models. The Verbit team attended the Panopto User Conference to demonstrate how Verbit's captioning technology and Panopto's video platform work hand in hand to enable these superior learning environments and increase student engagement through interactive and personalized elements.

What is Blended Learning?

As part of this new paradigm, blended learning, the practice of using both online and in-person learning experiences, has become a commonplace feature of many institutions. This model involves the following:

  • Combining instructional modalities (i.e. visual, auditory, kinesthetic, and tactile)
  • Combining instructional methods
  • Combining face-to-face and online instruction

For successful blended learning, the in-person and online elements must function in tandem to create deeper educational experiences, while giving the student some control over their individualized paths and the pace at which they want to learn. An emphasis on self-driven and interactive learning places technology at the center of this learning model.

Evolving Technology

In recent years, the educational technology landscape has expanded to include learning management systems, video hosting platforms, in-class response systems, tablets, smartphones, analytics tools and more. However, the most common components of blended learning are audio and video materials.

Transcription and Captioning for Accessibility

While blended learning has led to proven results in student success, it is not without its challenges, with accessibility often being cited as a key hurdle that institutions must overcome. To address these challenges, all audio elements must be accompanied by transcripts and all video must include synchronized captions. These are key considerations for course design that benefit all students, including those with auditory disabilities, students who prefer to learn by reading and those who are not native speakers of the language of instruction.
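In practice, synchronized captions for web-hosted course video are usually delivered as a separate text track alongside the media file. A minimal sketch using HTML5 and the WebVTT format (the file names and cue text here are placeholders, not from any particular platform):

```html
<!-- Hypothetical lecture page: captions ride along as a separate WebVTT
     track that students can toggle on or off in the player. -->
<video controls width="640" src="lecture-01.mp4">
  <track kind="captions" src="lecture-01.vtt" srclang="en"
         label="English" default>
</video>

<!-- lecture-01.vtt (companion file) holds cues timed to the audio:

WEBVTT

00:00:01.000 --> 00:00:04.500
Welcome back. Today we'll look at blended learning models.
-->
```

Because the captions live in their own timed file rather than being burned into the video, the same transcript can also be offered as a standalone study document.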

Verbit’s AI-enabled academic transcription and captioning solution is tailored to the education industry, leveraging customized speech-to-text technology that is trained with course-specific content for higher accuracy and faster turnaround. Seamless integration with the Panopto video hosting and streaming platform means that all content, regardless of form, becomes instantly accessible to all students. The platform levels the academic playing field so that every student has an equal shot at success.

 

Blended education continues its rise as a fast-growing and highly effective approach to enhancing student engagement and boosting grades. Technology tools that work together, including video streaming platforms and AI-enabled transcription and captioning solutions, have the potential to unlock new levels of academic success for all students.

The Difference Between Subtitles, Closed Captions and SDH

While many people believe that all video captioning is created equal, there are key distinctions between each kind that make them ideally suited for different content types and audiences. Let’s explore the main differences between the three most common options: subtitles, closed captions, and Subtitles for the Deaf and Hard of Hearing (SDH).

 

What are Subtitles?

Subtitles are written transcriptions that are synchronized to media files so that they play at the same time as the spoken word. They can either be embedded in the file itself, or they can be turned on or off at the user’s discretion. Most importantly, subtitles are designed for hearing users, as they only cover spoken text. They do not include sound effects or other audio elements.

There are several benefits to subtitles, including:

  • Clarifying heavily accented or otherwise inaudible speech
  • Translating foreign speech
  • Allowing viewers to engage with media in environments where audio cannot be played aloud

 

The Difference Between Subtitles and SDH

As mentioned above, subtitles are meant for hearing listeners and only transcribe spoken word. SDH, on the other hand, are designed for those who are deaf, Deaf, or hard of hearing.

  • deaf with a lowercase “d” refers to those who have no ability to hear but can communicate orally.
  • Deaf with an uppercase “D” refers to members of the Deaf community, who communicate primarily through sign language.
  • Hard of hearing refers to those with any level of hearing loss that compromises their ability to process sound in some way. This category includes individuals who use hearing aids.

SDH provide a richer experience for these viewers than plain subtitles do by including information beyond speech, such as speaker tags, sound effects and music cues.

For example, SDH subtitles will indicate audio elements such as music, coughing or audience laughter. Like plain subtitles, SDH run simultaneously with the audio or video file, syncing the text with the action and speech.
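The difference is visible directly in the caption file. Here is a sketch of SDH cues in the common SRT format (the timings, speaker names and dialogue are invented for illustration); plain subtitles would keep only the spoken lines:

```
1
00:00:12,000 --> 00:00:14,500
[tense music playing]
SARAH: I can't believe you came back.

2
00:00:15,000 --> 00:00:17,000
JAMES: Neither can I.
(audience laughter)
```

The speaker tags and bracketed sound descriptions are what make this SDH; stripped of them, the same file would serve as ordinary subtitles for hearing viewers.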

 

Is SDH the Same as Closed Captioning?

In the United States, closed captions are required by law on public broadcasts under FCC regulations. They can be found on most televisions and are usually what pops up if you choose the subtitle option on a traditional TV. Although geared toward the same audience and similar in content, there are a number of differences between closed captions and SDH.

  • You’ll find SDH on many more media types, such as streaming internet video and Blu-ray discs. This is because closed captioning is not supported over HDMI (High-Definition Multimedia Interface), while regular subtitles and SDH are.
  • Closed captions are typically formatted as white text on a black background that can be positioned anywhere on the screen. In contrast, SDH is usually found on the bottom third of the screen and can vary in color.
  • SDH will come with the option to turn the subtitles on or off, and often can be manipulated to be larger or smaller in font. Closed captions, on the other hand, rarely include these kinds of options.
  • SDH and closed captions are encoded differently. While closed captions are encoded as a stream of commands, control codes and text, SDH are typically encoded as bitmap images made up of tiny dots, or pixels.

Beyond serving individuals with hearing loss, subtitles benefit a wide range of viewers by:

  • Improving comprehension for ESL speakers
  • Helping viewers with attention deficit or cognitive difficulties
  • Helping viewers understand speakers with accents or speech impediments

 

Impacting Accessibility Through SDH

The biggest benefit to including subtitles lies in increasing accessibility to multimedia content. SDH allows viewers who cannot access the auditory component of a media file to still enjoy the media in the fullest way possible, without missing out on the supplementary sounds that add to the overall viewing experience. SDH also gives audiences with hearing impairments the closest thing to an equal experience, which is important not only for disability rights but also for information acquisition.

 

While SDH undoubtedly enriches entertainment value, it also provides a more level playing field when it comes to educational, work and informational resources. Regardless of the content, SDH enables greater inclusion for a portion of the population that would otherwise be cut off from many forms of media.
