All over the world, parents sing songs and recite rhymes to their young children. Researchers have known for some time that this has a stimulating or calming effect on babies, but it turns out that babies are also sensitive to the language patterns in these rhymes.
This is the conclusion reached by linguist Laura Hahn in her Radboud University Ph.D. thesis, which she is due to defend on 1 February.
The rhymes that many babies hear from their parents contain all kinds of poetic structures, such as rhyme, rhythm, and verse lines. Hahn studied the extent to which babies recognize these language patterns. At the Radboud University Baby & Child Research Center, she had babies listen to children’s songs and rhymes.
In one of her experiments, Hahn used the Headturn Preference Procedure (HPP) to investigate whether babies are aware of phrases, i.e. verse lines, in songs. The children were seated on their parents’ lap and heard a word sequence from songs coming from the left or the right.
They kept their head turned for a longer time towards songs that contained a word sequence as a full phrase. When the songs played only contained the word sequence as individual words, the babies looked away more quickly.
“This suggests that babies are able to recognize phrases in songs,” says Hahn.
Hahn also investigated whether babies can perceive rhyme in songs and spoken verses by having them listen to verses that were rhyming or non-rhyming and rhythmic or non-rhythmic.
Both in terms of listening time and brain activity, she observed differences between rhyming and non-rhyming stimuli, and these were partially influenced by rhythm, although not all observed differences were significant.
“This requires follow-up research,” says Hahn, who did discover a link between sensitivity to rhyme and rhythm in songs and verses, and the size of a baby’s vocabulary. “So singing and reciting rhymes can have a positive effect on babies’ language development.”
It is therefore not a bad idea to become more aware of the function of rhymes and songs. “We already know that songs and rhymes have socio-emotional benefits. They stimulate children or, conversely, calm them down. Now we also have a first impression of how babies become acquainted with language patterns through language play such as in songs and rhymes. This can come in useful, for example in day-care centers, for children with language development disorders or multilingual children,” says Hahn.
All children can benefit from learning important language patterns in the attractive context of a song or rhyme. “But we do need more comparable research before we can draw hard conclusions about the effects of songs and rhymes on the linguistic development of babies,” emphasizes Hahn.
At the same time, she does see practical applications. “If we can introduce people to these patterns in an interesting way, and in this way help them to learn a language better, this could be an argument for devoting more attention to music and poetry in our interaction with young children. At home, but also in school.”
It is well known that the adult brain is specialized in its response to native language (Perani et al., 1996; Dehaene et al., 1997). Recent evidence has suggested that the human brain is tuned to language from the earliest stages of development. Only a few days after birth, neonates respond differently to language than to non-linguistic sounds. Very young infants demonstrate a preference for listening to speech over non-speech (Vouloumanos and Werker, 2007), and are capable of discriminating languages from different rhythmical classes (Mehler et al., 1988; Nazzi et al., 1998; Ramus et al., 2000).
However, what is unknown from past research is the extent to which early prenatal experience with language may play a role in determining the organization of neonates’ neural tuning for language. In particular, no one has yet investigated whether the experience that neonates have with the native language while in utero influences the pattern and location of brain activity to familiar versus unfamiliar language. In the current study, we use near-infrared spectroscopy (NIRS) to take the first steps in exploring this question.
Research to date examining the neonate brain response to language versus non-language has shown that brain responses to familiar language are both stronger and more specialized when compared to the response to non-language (Dehaene-Lambertz et al., 2002, 2010; Pena et al., 2003). Using behavioral methods, a left hemisphere advantage for language has been inferred through dichotic listening to individual syllables as measured by high-amplitude sucking in newborns (Bertoncini et al., 1989), as well as through mouth asymmetries during babbling in 5 to 12-month-olds (Holowka and Petitto, 2002).
In neuroimaging research, optical imaging studies with newborns have shown a greater left hemisphere response to audio recordings of forward versus backward speech (Pena et al., 2003), as well as evidence that the left hemisphere plays an important role in processing repetition structures in language (e.g., ABB versus ABC syllable sequences; Gervain et al., 2008). Similarly, fMRI studies with infants 2–3 months of age indicate differential responses in the left hemisphere to continuous forward versus backward speech (Dehaene-Lambertz et al., 2002), and to speech versus music (Dehaene-Lambertz et al., 2010).
These functional studies are supported by structural MRI analyses indicating asymmetries at birth in the left hemisphere language areas of the brain (Dubois et al., 2010). All of the above studies, however, have focused on young infants’ neural response to familiar language, leaving open the question of how much responses may have been driven by language experience.
At birth, neonates are experiencing extra-uterine language for the first time. However, in utero they have had the opportunity to learn about at least some of the properties of language. The peripheral auditory system is mature by 26 weeks gestation (Eisenberg, 1976), and the properties of the womb are such that the majority of low-frequency sounds (less than 300 Hz) are transmitted to the fetal inner ear (Gerhardt et al., 1992). The low-frequency components of language that are transmitted through the uterus include pitch, some aspects of rhythm, and some phonetic information (Querleu et al., 1988; Lecanuet and Granier-Deferre, 1993). Moreover, the fetus has access to the mother’s speech via bone conduction (Petitjean, 1989).
There is evidence that the fetus can hear and remember language sounds even before birth. Fetuses respond to and discriminate speech sounds (Lecanuet et al., 1987; Zimmer et al., 1993; Kisilevsky et al., 2003). Moreover, newborn infants show a preference for their mother’s voice at birth (DeCasper and Fifer, 1980) and show behavioral recognition of language samples of children’s stories heard only during the pregnancy (DeCasper and Spence, 1986). Finally, and of particular interest to our work, newborn infants born to monolingual mothers prefer to listen to their native language over an unfamiliar language from a different rhythmical class (Mehler et al., 1988; Moon et al., 1993). These studies suggest that infants may have learned about the properties of the native language while still in the womb.
In a recent extension of the work showing a preference for the native language at birth, Byers-Heinlein et al. (2010) investigated how prenatal bilingual experience influences language preference at birth. Infants from 0 to 5 days of age born to either monolingual English or bilingual English–Tagalog mothers were tested in a high-amplitude sucking procedure. Infants were played sentences in both English (a stress-timed language) and Tagalog (a Filipino language that is syllable-timed).
Sentences from both languages were low-pass filtered (to a 400-Hz cut-off), to maintain the rhythmical information of each language while eliminating most surface segmental cues that may differ across languages. Byers-Heinlein et al. (2010) found that while all infants could discriminate English and Tagalog, the monolingual-exposed infants showed a preference only for English, whereas the bilingual-exposed infants showed a similar preference for both English and Tagalog. These results provide strong evidence that language preference at birth is influenced by the language heard in utero, even when infants have had prenatal experience with multiple languages. The neural correlates of this behavioral preference for familiar language(s) at birth are, however, unknown.
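The idea behind low-pass filtering speech is that rhythm and pitch live mostly below a few hundred Hz, while the segmental (consonant/vowel identity) cues live higher. A minimal single-pole filter sketch illustrates the principle; this is not the filter Byers-Heinlein et al. (2010) actually used (their design is not specified here beyond the 400-Hz cut-off), and the sample rate and test tones are invented for illustration:

```python
import math

def lowpass(samples, fs, fc):
    """First-order (single-pole RC) low-pass filter with cut-off fc Hz.

    A rough stand-in for the low-pass filtering described in the text;
    any real study would use a steeper, properly designed filter.
    """
    rc = 1.0 / (2 * math.pi * fc)
    dt = 1.0 / fs
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # smooth toward the input: high frequencies lag
        out.append(y)
    return out

fs = 8000                      # assumed sample rate in Hz
t = [i / fs for i in range(fs)]  # one second of signal
# a 50 Hz tone (prosody range) passes almost unchanged...
low = lowpass([math.sin(2 * math.pi * 50 * ti) for ti in t], fs, 400)
# ...while a 3000 Hz tone (segmental range) is strongly attenuated
high = lowpass([math.sin(2 * math.pi * 3000 * ti) for ti in t], fs, 400)
```

Comparing the peak amplitudes of `low` and `high` after the filter settles shows the low tone nearly intact and the high tone reduced to a fraction of its input amplitude, which is exactly the asymmetry the filtering exploits: rhythm survives, segmental detail does not.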
A recent neuroimaging study of infants’ processing of speech and non-speech has provided some support for the hypothesis that language experience may impact early neural specialization for processing some aspects of language (Minagawa-Kawai et al., 2011). Using NIRS, 4-month-old Japanese infants’ brain response was assessed while listening to a familiar language (Japanese), to an unfamiliar language (English), and to different non-speech sounds (emotional voices, monkey calls, and scrambled speech).
Greater left hemisphere activation was reported for both familiar and unfamiliar language when compared to the non-speech conditions. Critically, activation was also significantly greater to the familiar language when compared to the unfamiliar language. This latter finding implies that by 4 months of age, the young brain responds differently to familiar versus unfamiliar language and is thus influenced by language experience.
However, the infants studied by Minagawa-Kawai et al. (2011) were 4 months of age – meaning that these infants have dramatically more experience with their native language than newborn infants. It is unknown whether infants with only a few hours of post-natal experience will show a similar difference in neural activation to a familiar versus an unfamiliar language.
In contrast to the above studies demonstrating the impact of language experience on newborn infants’ language processing, other areas of research have uncovered aspects of language perception that appear unaffected by specific language experience early in development. For example, neonates’ rhythm-based language discrimination has been shown to be based on language-general abilities. Phonologists have traditionally classified the world’s languages into three main rhythmic categories: stress-timed (e.g., English, Dutch), syllable-timed (e.g., Spanish, French), and mora-timed (e.g., Japanese).
This distinction is critically important to language learning as rhythmicity is associated with word order in a language (Nespor et al., 2008), rendering it one of the most potentially informative perceptual cues for bootstrapping language acquisition. Recent cross-linguistic investigations have more finely quantified the distinction between rhythmical classes, finding that languages fall into rhythmical class on the basis of two parameters: percent vowel duration within a sequence and the standard deviation of the duration of consonantal intervals (Ramus et al., 1999; see also Grabe and Low, 2002 for a different measurement scheme).
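Once an utterance has been segmented into vocalic and consonantal stretches, the two Ramus et al. (1999) parameters are simple to compute: %V is the proportion of total duration spent in vowels, and ΔC is the standard deviation of the durations of consonantal intervals (runs of adjacent consonants). A minimal sketch, with made-up segment durations:

```python
from statistics import pstdev

def rhythm_metrics(segments):
    """Compute (%V, deltaC) from a list of ('V'|'C', duration_s) segments.

    Adjacent consonants are merged into a single consonantal interval,
    following Ramus et al. (1999). The durations used below are invented
    for illustration, not measured speech data.
    """
    total = sum(d for _, d in segments)
    vowel = sum(d for kind, d in segments if kind == 'V')
    # merge runs of consonants into consonantal intervals
    c_intervals, run = [], 0.0
    for kind, d in segments:
        if kind == 'C':
            run += d
        elif run:
            c_intervals.append(run)
            run = 0.0
    if run:
        c_intervals.append(run)
    percent_v = 100.0 * vowel / total
    delta_c = pstdev(c_intervals)  # population SD of interval durations
    return percent_v, delta_c

# a toy CVCCV utterance: C 80 ms, V 120 ms, CC 60+40 ms, V 100 ms
segments = [('C', 0.08), ('V', 0.12), ('C', 0.06), ('C', 0.04), ('V', 0.10)]
pv, dc = rhythm_metrics(segments)  # pv = 55.0 (%V), dc = 0.01 (s)
```

In the Ramus et al. framework, syllable-timed languages like Spanish cluster at higher %V and lower ΔC than stress-timed languages like English, which permit heavier consonant clusters; that is what makes these two numbers a usable rhythmic signature.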
In a long series of studies, it has been demonstrated that young infants are able to discriminate languages from different rhythmical classes (Mehler et al., 1988; Nazzi et al., 1998; Ramus et al., 2000). This ability does not depend on familiarity with one or both of the languages being tested. Infants with prenatal experience with a single language can discriminate the native language from a rhythmically dissimilar unfamiliar language (Mehler et al., 1988), as well as discriminate two unfamiliar rhythmically different languages (Nazzi et al., 1998).
Further, infants with prenatal bilingual exposure are able to discriminate their two native languages when those languages are from different rhythmical classes, even though both languages are familiar (Byers-Heinlein et al., 2010). These findings show that rhythm-based language discrimination in newborns is not based on experience with the native language, but instead on initial universal biases. It therefore may be the case that the early neural response to language in neonates also reflects similar language-universal processing.
reference link: https://www.ncbi.nlm.nih.gov/labs/pmc/articles/PMC3177294/
Source: Radboud University