> I’ve felt this same way about all these government press conferences that have a picture-in-picture of someone doing sign language translation. I genuinely don’t understand the point apart from virtue signaling. Why not just use live transcription?
I understand why you might feel this way, so let me try to explain.
Sign languages are a completely different modality/type of language than written and spoken languages. For example, American Sign Language (ASL) is completely unrelated to written and spoken English, and conveying a message between languages is often complicated and requires someone to actively interpret -- it's more of an art than a science. So while Closed Captioning was a huge step forward for making things accessible to deaf and hard-of-hearing people, it's still a subpar method of communication because for deaf people, especially deaf immigrants, written English is often their 2nd/3rd/4th language. When you are communicating important information where comprehension is essential, you *need* an interpreter.
This is absolutely correct. ASL is so deeply different from English that, in situations where it's critical that a signer's understanding of the topic is clear, there can be two interpreters at once: a hearing interpreter who interprets verbatim to a Deaf interpreter (a CDI, or certified Deaf interpreter), who then interprets to the Deaf recipient.
Public broadcasts should absolutely be interpreted to ensure that everyone has access to understanding them.
I encourage everyone to learn more about ASL and to learn a bit of ASL themselves. There are many amazing resources put out by Deaf people to help folks learn.
> A hearing interpreter who interprets verbatim to a Deaf interpreter (a CDI, or certified Deaf interpreter) who then interprets to the Deaf recipient.
This is a fantastic point. Learning about CDIs really helped me understand the chasm between written and spoken languages versus sign languages.
For example, writing a program that leverages NLP to provide real-time captioning doesn't necessarily make the content as accessible as you'd think.
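To make that concrete: a minimal live-captioning sketch might look like the one below (this assumes the Python SpeechRecognition package and Google's free recognizer endpoint; none of this comes from the thread). Notice that the best-case output is word-for-word English text, i.e. exactly the closed captions being discussed, and nothing in the pipeline gets any closer to ASL.

```python
# Hypothetical live-captioning sketch using the SpeechRecognition package.
# The output is English prose transcribed word for word; it is transcription,
# not interpretation, which is the gap the commenters above are describing.
import speech_recognition as sr

recognizer = sr.Recognizer()
microphone = sr.Microphone()

def on_speech(recognizer, audio):
    try:
        # Speech-to-text: emits English captions as phrases are recognized.
        caption = recognizer.recognize_google(audio)
        print(caption)
    except sr.UnknownValueError:
        pass  # couldn't make out this chunk of audio; skip it

with microphone as source:
    recognizer.adjust_for_ambient_noise(source)

# Listen in the background and print captions in (roughly) real time.
stop_listening = recognizer.listen_in_background(microphone, on_speech)
```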
The parent commenter is correct: ASL is a completely separate language from spoken English. Many people who are raised with ASL as their first language prefer seeing ASL rather than reading English.
Modern ASL includes elements of "signed English," but it is its own language, often cited as the third most commonly used in the US.
I am a hearing person, so take my understanding of this with a grain of salt. Here is a primary resource from a Deaf creator that describes the difference.