The Caption Center at WGBH invents broadcast captioning, providing open captions on TV for the first time. Closed captions, which viewers can choose to turn on or off, came along around 1980.
Courtesy of WGBH
Tanzina Vega: This is The Takeaway. I'm Tanzina Vega. The Americans With Disabilities Act just turned 30, and all week we've been bringing you stories about the lasting impact of the ADA, and some of the issues that people with disabilities continue to face in this country every day. One problem the ADA helped address was media accessibility, and over the years the ADA has helped ensure that people who need services like closed captioning or audio description have access to them. The Takeaway's partner station WGBH in Boston actually helped to create and implement both closed captioning and description services.
The first show ever on television to be captioned was Julia Child's The French Chef on WGBH in 1972. Nearly 20 years later, the station debuted audio description with an episode of American Playhouse in 1990. Ira Miller is a production manager for the Media Access Group at WGBH, and Tim Alves is a supervisor of caption operations there. They join me to talk about the history of this technology and how it's used today.
Tim Alves: If you're watching a sporting event or live news, then there is a stenocaptioner transcribing that. Basically, they're taking the audio feed from the live event and running it through their stenography machine. It's basically like a court reporter, except with slightly different software. They can type up to 250 to 300 words a minute, pretty much within fractions of a second of the audio being heard. It's then transmitted out on air.
Tanzina: Ira, let's talk about something else, called audio description services. What is it, and how was it developed?
Ira Miller: Audio description, as it's commonly referred to today, really came out of an aha moment back in 1985. We had a technologist here at WGBH who was aware of a theater organization in the Washington, DC area, the Washington Ear, that was providing audio description for blind audiences in theaters. In 1985, technology advanced so that your television set at home was now capable of broadcasting stereo, two channels. Barry had an aha moment. He said, "Wow, I think we can combine that capability to serve the blind community with television programming by creating an audio description track and broadcasting it on what we commonly refer to as the secondary audio program, or the SAP channel." That's where the idea came from. The first tests really were American Playhouse. PBS backed that a hundred percent, and from there it just blossomed, because the reaction from the blind community was very supportive and we've just gone from there.
Tanzina: Ira, we're going to play a clip from the Sherlock Holmes TV show so that people have an idea of exactly what it sounds like if they've never heard it before. This is an example of audio description. Let's take a listen.
Audio Describer: In the Baker street flat, Watson glances at the phone. Frowning, he reads a text message and carries the phone to Sherlock who is still gazing into the microscope. Test tubes and vials cover the table.
Sherlock: Not now. I'm busy.
Watson: Shut up.
Sherlock: Not now.
Watson: He's back.
Audio Describer: Sherlock reads the text, 'Come and play. Tower Hill, Jim Moriarty.'
Tanzina: Ira, how do you make sure that the descriptions that people are hearing make sense to a wide range of people? Are there certain standardized ways that you describe things to people?
Ira: There's a very unique training that people who want to become audio describers undertake. We consider it description of the key visuals. What we do is find pauses in a television program or a movie where we can come in and describe what's on screen. We envision it like having a friend sitting next to you, if you can't see, whispering in your ear and telling you what's happening, on television or in the movie, but we have the luxury of knowing when the pauses occur.
The writers are unique because they have to have a real command of language and grammar and be very precise, because some of these pauses may be as short as one second. Finding the time to describe, and deciding what on screen to convey so that it makes sense to viewers who can't see it, is really invaluable.
Tanzina: What's being done to ensure that more people have access to these types of services in general, Tim, particularly if we ever get back to a point where we'll be in public spaces like theaters?
Tim: Well, for movie theaters there's MoPix, which already exists and which we've been doing for years. It's a personalized caption device that you can have at your seat, so you can see the captions come up as you watch the movie. And I think with recent laws, the Twenty-First Century Communications and Video Accessibility Act and everything that's going on, even television meant for streaming only is captioned.
When you're sitting at home, as many of us are now, anytime you turn on Netflix or any streaming service, you're going to have captions, because that was mandated back in 2010. I think they're always cognizant of what's next in terms of how media is presented to people and how we're going to make that accessible. I think it's an important thing that the communities who are affected are always advocating for themselves.
Tanzina: Ira, we're talking about this technology as the Americans With Disabilities Act turns 30. Are there any legal requirements to have audio description services available, similar to what Tim just described regarding captions?
Ira: Yes. That was the breakthrough. It's a long history of advocacy by many people in the blind community and some supporters like WGBH, the National Federation of the Blind, and the American Council of the Blind. All these people lobbied, beginning in the early 2000s, to have the same availability and accessibility that captioning had. Captioning was put into law, as Tim mentioned. There was no such law for audio description, because there wasn't description yet; it was a later technology breakthrough. But when the FCC held hearings back in the early 2000s, they passed a rule that said audio description had to be provided for television and motion pictures.
It was a long road and a lot of advocacy, but finally, in 2010, the law was signed. It was finally put into action in 2015, and we've come a long way. There's an abundance of description now, not as much as captioning, and it's still being phased in; there are specific rules about how many hours of audio description each provider must offer. It's a phased-in approach, so there will be more coming, and I think we're going to see a lot more in the years to come, because it's been demonstrated and accepted as something that should be provided to our audiences.
Tanzina: Just to round out the segment here asking both of you, we were talking about technology and there's been so much new technology in the past decade alone. So many of us are now receiving and consuming media through our cell phones for example. Has emerging media done enough to make sure that captioning and descriptions are available on these devices, Ira?
Ira: Yes. We have a division here at WGBH, which we call our R and D division, the National Center for Accessible Media. They've been there every step of the way since their creation in '93, staying abreast of emerging technologies to make sure those technologies don't become barriers, and working out how to overcome any technology barriers to provide captions and description. It's imperative that these technology advances do not create impediments for people to receive these very vital services.
Tanzina: Tim, what about you? Are emerging technologies doing enough?
Tim: I think there's going to be some lag, which we're encountering in some ways now. We have live streaming events; some platforms have integrated caption abilities and some have sort of antiquated caption integration. It's a process where some are not quite there yet, and I think they're working on it. It just takes some time and some effort, but it's not always as instantaneous as we'd like.
Tanzina: Tim Alves is the supervisor of caption operations at the Media Access Group at WGBH, and Ira Miller is the production manager at the Media Access Group at WGBH. Thanks to you both.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.