NNA Corpus Video 2: Lab and Procedures

Author(s): Hannah Brennan, Bren Yaghmour, Domi Hannon, Jaxon Myers

Mentor(s): Vincent Chanethom, Linguistics; Harim Kwon, English; Giulia Masella Soldati, Haley A. Todd, Graduate Assistants

Abstract

Ultrasound is a non-invasive method of looking inside the body to observe the tongue's movements. Because we cannot see what is happening inside a person's mouth when they talk, the study of speech has largely relied on inferring what might be happening from the speaker's acoustics and lip movements. The Non-Native Articulatory Corpus will give researchers all over the world access to speech ultrasound data, a resource that is expensive and difficult to acquire. By creating an online database of audio and ultrasound recordings of native and non-native speakers of various languages, we will be able to compare and contrast the movements of the tongue when speakers attempt the same sounds. These comparisons can be used by linguists, speech therapists, and language learners to analyze and adjust pronunciation. The database provides extensive data on a subject that is understudied due to the prevalence of Anglocentrism in research, and it makes use of the diverse language population in and around George Mason University.

Video Transcript

The Non-Native Articulatory Corpus allowed us undergraduate researchers to collect data from human subjects in our ultrasound lab. If you haven't already, check out our video introducing the NNA Corpus and its website. Here, Hannah and I will show you our lab and the equipment we've been using this summer.

For this project we're collecting two different types of data. First, we record acoustic data using this microphone and this little red box. However, the pieces I find most interesting are the ultrasound machine here and the probe, or transducer, which is here. These two collect articulatory data, which is how the tongue moves when you make speech sounds. This is the focus of our project, as it is what makes the NNA Corpus so unique. The two recordings can communicate and sync with each other using this silver box here. Participants wear the stabilizing helmet with the ultrasound probe attached at the base, underneath their chin. Using ultrasound transmission gel, the probe emits waves that can map the location of the tongue in real time.

When you come in as a participant, we're going to put you in this helmet. We'll first adjust the tightness on your head, and then secure it using front and back straps. It will be a little awkward to have on your head, but it's not going to be uncomfortable. We'll then adjust the height of this ultrasound arm so that it's close enough for us to see your tongue while still letting you say anything you need to. Then we're going to adjust the ultrasound image.

Alright, so this is Hannah's tongue. Back here is the back of her tongue, or the root, and over here is the tip of her tongue, see? This thing back here, this dark shadow, is her hyoid bone, which moves as she speaks. And then out at the front is her mandible, or jaw, which is this bone right here. I'm going to give her the floor to make some weird sounds now, so please enjoy!

First I'll do some vowels: we have [eee-yaaaa-uuuu], and then we say [tuh tuh tuh], [kuh kuh kuh], [key coo], [caw, gah]. [Coughs, yawns, whistles as if calling a dog]

Most people think of ultrasound technology as being used solely in the medical field. But its ability to record the movements of the tongue without needing to be inside the mouth makes it useful in linguistics research as well. Not only will researchers be able to see the errors that non-native speakers are making, but they will also be able to explore the different motor capabilities related to speech production. The combination of acoustic and articulatory data allows researchers to compare what we already know from acoustic findings with what is normally hidden inside the speaker's mouth. For example, there can be more than one way to articulate the same sound. In English, there are two different ways to articulate /r/: in a bunched /r/, the tongue body is raised while the tongue tip is lowered, and in a retroflex /r/, the tongue tip is raised.

Bunched participant: Say red car
Retroflex participant: red car

Even though these two speakers sound the same, our corpus will allow us to actually see and study atypical but acoustically normal articulations. Our team has another video describing the analysis of the ultrasound data we recorded, so check that out if you haven't already. This has been our equipment and the procedure for our participant recordings. If you think it would be fun to see your tongue as you speak, our website and social media will have updates for when we start recording your second language.

For more on this topic see:
NNA Corpus Video 1: Intro to Linguistic Corpus
NNA Corpus Video 3: Ultrasound Analysis
