Parents who communicate with their infants help improve the infants’ language skills and later abilities

A new language-skills study that included infants later diagnosed with autism suggests that all children can benefit from exposure to more speech from their caregivers.

Dr. Meghan Swanson, assistant professor at The University of Texas at Dallas, is the corresponding author of the study, published online June 28 in Autism Research.

It is the first to extend research about the relationship between caregiver speech and infant language development from typically developing children to those with autism.

The findings could inform guidelines for earlier action in cases of developmental difficulties.

“You can diagnose autism at 24 months at the earliest; most people are diagnosed much later. Early intervention, from birth to age 3, has been shown to be effective at supporting development in various cohorts of children,” said Swanson, who joined the School of Behavioral and Brain Sciences in January as the director of the Infant Neurodevelopment & Language Research Lab, known as the Baby Brain Lab.

She said there has been a push to identify autism earlier or demonstrate that the same techniques that help most children develop language skills also benefit those eventually diagnosed with autism.

The study involved 96 babies, 60 of whom had an older sibling with autism.

Swanson said that this “baby-sibling” research design was necessary.

“How do you study autism in infancy when you can’t diagnose it until the kids are age 2 at least?” she asked.

“The answer relies on the fact that autism tends to run in families. These younger siblings have about a 20% chance of being diagnosed eventually with autism.”

Indeed, 14 children from the high-risk subset of 60 were diagnosed with autism at 24 months.

The study results directly tied the number of words an infant hears, and the conversational turns he or she takes, to performance on the 24-month language evaluation, both for typically developing children and for those with autism.

“One conclusion we’ve come to is that parents should be persistent in talking with their babies even if they aren’t getting responses,” Swanson said.

Swanson emphasized the importance in her field of large longitudinal studies like this one, which track the same individuals across an extended period.

“You have to follow the same children for years to learn anything conclusive about development,” she said. “You can’t simply shift from a group of 2-year-olds to a different group of 3-year-olds and so on.”

Correcting misunderstandings about parents’ influence on autism has been a gradual fight against outdated conceptions, Swanson said.

“When parents receive an autism diagnosis for a child, some might wonder, ‘What could I have done differently?’” she said.

“There is no scientific backing for them to think in these terms. But there is a dark history in autism where parents were wrongly blamed, which reinforced these thoughts. To do research involving mothers as we have, you must approach that topic with sensitivity but also firmly reinforce that the logic that parenting style can cause autism is flawed.”

The children’s interactions with caregivers were recorded over two days, once at 9 months and again at 15 months, via a LENA (Language Environment Analysis) audio recorder. The children’s language skills were then assessed at 24 months.

“The LENA software counts conversational turns anytime an adult vocalizes and the infant responds, or vice versa,” Swanson said.

“The definition is not related to the content of the speech, just that the conversation partner responds. We believe that responding to infants when they talk supports infant development, regardless of eventual autism diagnosis.”
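The turn-counting logic Swanson describes can be sketched in a few lines of code. What follows is a minimal illustration, not LENA’s proprietary algorithm: it assumes vocalizations have already been segmented and labeled by speaker, and it counts a turn whenever the other partner responds within a five-second window (the window size here is an assumption).

```python
# Minimal sketch of conversational-turn counting (not LENA's actual
# algorithm). Assumes vocalizations are already segmented and labeled
# by speaker; the 5-second response window is an assumption.

from dataclasses import dataclass

@dataclass
class Vocalization:
    speaker: str   # "adult" or "child"
    start: float   # seconds from the start of the recording
    end: float

def count_conversational_turns(segments, max_gap=5.0):
    """Count alternations between adult and child vocalizations that
    occur within `max_gap` seconds of each other, regardless of content."""
    ordered = sorted(segments, key=lambda s: s.start)
    turns = 0
    for prev, curr in zip(ordered, ordered[1:]):
        if curr.speaker != prev.speaker and curr.start - prev.end <= max_gap:
            turns += 1
    return turns

# Example: three vocalizations yield two conversational turns.
recording = [
    Vocalization("adult", 0.0, 1.5),
    Vocalization("child", 2.0, 2.8),   # child responds within 5 s -> turn
    Vocalization("adult", 4.0, 5.0),   # adult responds within 5 s -> turn
]
print(count_conversational_turns(recording))  # -> 2
```

As in the quote above, the count depends only on timely alternation between partners, not on what is said.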

The project was undertaken by the Infant Brain Imaging Study (IBIS) network, a consortium of eight universities in the United States and Canada funded by the National Institutes of Health as an Autism Center of Excellence. Before joining UT Dallas, Swanson was a postdoctoral fellow at the University of North Carolina at Chapel Hill, one of IBIS’ study sites.

The other study sites are Children’s Hospital of Philadelphia, Washington University in St. Louis, the University of Washington in Seattle and the University of Minnesota Twin Cities campus.

Dr. Joseph Piven, the IBIS network’s principal investigator, is the director of the Carolina Institute for Developmental Disabilities at UNC-Chapel Hill.

For parents, the results should highlight the long-term effect of initiating conversations from an early age, he said.

“Talking to your kids makes a big difference,” Piven said. “Any impact on early language skills will almost certainly have an impact on a wide range of later abilities in school-age children and significantly enhance their probability of success.”

Swanson said the most important takeaway from this work is that parents can make a significant difference in language development, even in children who are eventually diagnosed with autism.

“Parents can be amazing agents of change in their infants’ lives from as early as 9 months old,” she said. “If we teach parents how to provide their children with a rich communication environment, it helps support their children’s development. I find that incredibly hopeful: the power that parents have to be these positive role models.”

In addition to UT Dallas and the IBIS study sites, researchers from Temple University, Perelman School of Medicine at the University of Pennsylvania, McGill University and the University of Alberta contributed to this study.

Funding: The Simons Foundation also supported the research.


Many studies have shown that young children’s learning of language material from video is very low compared with learning from human tutors, a pattern called the “video deficit” (1).

For example, research has established infants’ ability to learn foreign language phonemes (the consonants and vowels that make up words) through social but not nonsocial contexts (2).

In Kuhl et al. (2), 9-mo-old infants were exposed to Mandarin Chinese in twelve 25-min laboratory visits.

Each infant experienced one of three exposure styles: live social presentation, the same foreign speakers and material presented on video, or an audio recording of the same speakers and material.

A control group of infants experienced live social presentation but heard only English. Phonemic learning, tested with behavioral and brain measures after completion of the second-language (L2) exposure sessions, demonstrated that only infants exposed to live Mandarin speakers discriminated the foreign phonemes as well as native Mandarin-learning infants; no learning occurred when exposure was via video displays or audio recordings (2).

Other studies confirm that children’s language learning is better from live humans than from screens (3–5).

In one study, video clips from Sesame Beginnings presented novel verbs to 2.5- and 3-y-old children (5).

Half of the children saw the novel verbs presented entirely on video; the other half saw a 50–50 split of presentations on video and delivered by a live social partner.

Children were tested on their ability to extend the novel verb to a new actor performing the same action. Results showed that toddlers who interacted with an adult in addition to watching a video learned the novel verbs at a younger age than children who passively viewed the video, and that learning from video was not as robust as learning from live social interactions.

Recent evidence suggests that the screen itself does not impede children’s learning; rather, the problem is the lack of interactivity in traditional media. One study used video chats to ask if 24- to 30-mo-olds can learn language in a video context that incorporates social interactions (6).

Even though video chats offer a 2D screen, this technology differs from traditional video in several important ways. Video chats allow children and an adult to participate in a two-way exchange, thereby approximating live social interactions.

Adults are also able to be responsive to children and ask questions that are relevant to them. Although the speaker’s eye gaze is often distorted in video chats because of the placement of the camera relative to the screen, video chats preserve many of the qualities of social interactivity that help children learn (7).

In fact, when 24- to 30-mo-olds were exposed to novel verbs via video chat, children learned the new words just as well as from live social interactions.

Toddlers showed no evidence of learning from noninteractive video. Myers et al. (8) recently demonstrated a similar phenomenon with 17- to 25-mo-olds.

These young toddlers experienced either a FaceTime conversation or a prerecorded video of the same speaker.

A week after exposure, children who interacted with an unknown adult via FaceTime recognized the adult and demonstrated word and pattern learning.

Thus, research provides evidence that children’s ability to learn language from screens can be improved by technology that facilitates social interactions (e.g., video chats) (6–8), by the content of media (e.g., reciprocal social interactions) (9), or by the context of screen media use (e.g., coviewing) (10).

This allows the field to move beyond the screen vs. live dichotomy and focus the discussion on the role of interactivity for children’s learning.

Historically, research on the effect of social interactivity in children’s media has paired adults with children.

However, there is some evidence that infants may also treat peers as social partners.

For example, Hanna and Meltzoff (11) found that 14- and 18-mo-olds imitate actions demonstrated by same-aged peer models and even recall these actions after a delay of 2 d. More recent evidence suggests a peer advantage, such that 14- and 18-mo-olds imitated complex action sequences better from 3-y-old models than from adult models (12).

Additional research indicates that children’s learning is enhanced in the mere presence of others.

That is, even when a peer is not serving as a teacher or model, simply being in the presence of a social other may facilitate learning.

Studies show that infants perform tasks differently—and better—when they are in the presence of another person (13), and research with school-aged children suggests that learning is improved by the mere presence of another person, or even the illusion that another person is present (14).

These findings indicate that explorations of the effects of having a social partner on children’s learning from media might broaden our understanding of the roles played by social peers, even if peers are not in the position of a teacher or model.

The current study investigates the effect of the presence of peers on infant foreign-language phonetic learning from video. We utilized the same Mandarin-language videos used previously in passive learning experiments (2), but made the procedure an active learning environment by allowing infants to control the presentation of videos using a touch screen.

Each touch of the screen initiated a 20-s clip of the Mandarin speaker talking about toys and books. Given that previous research on children’s learning from peers or in the presence of peers presents children with a task (11–14), the touchscreen paradigm gives infants an active learning task that nevertheless presents the same information as previous research (2).
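As a rough illustration of this contingency, the sketch below shows touch-triggered playback logic in Python. Only the contingency itself (each touch starts a 20-s clip) comes from the study description; `wait_for_touch` and `play_clip` are hypothetical placeholders for a real touchscreen and media-player stack, and the session length is an assumption.

```python
# Sketch of the touch-contingent playback loop described above. The
# helpers below are hypothetical placeholders, not a real lab API.

import time

CLIP_SECONDS = 20            # from the study description
SESSION_SECONDS = 12 * 60    # session length is an assumption

def wait_for_touch():
    """Hypothetical: block until the infant touches the screen and
    return the touch time. A real version would poll a touch driver."""
    time.sleep(0.1)          # placeholder so the sketch is self-contained
    return time.monotonic()

def play_clip(seconds):
    """Hypothetical: play the next 20-s Mandarin clip. A real version
    would call a media player; here a short sleep stands in for it."""
    time.sleep(0.1)

def run_session():
    touch_log = []           # touch timestamps, useful for later analysis
    start = time.monotonic()
    while time.monotonic() - start < SESSION_SECONDS:
        touch_log.append(wait_for_touch() - start)
        play_clip(CLIP_SECONDS)   # touches during playback are not logged here
    return touch_log
```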

We manipulate the presence of peers by randomly assigning infants to an individual-learning condition or a paired-learning condition. Infants in the individual condition participated in all study sessions by themselves, whereas infants in the paired condition always participated with another infant (Fig. 1).

Fig. 1. Examples of the individual- (A) and paired- (B) exposure sessions.

To measure infants’ foreign-language sound discrimination, we employ a behavioral measure, “conditioned head turn,” as well as a brain measure, event-related potentials (ERPs). Kuhl et al. (2) reported results based on a conditioned head turn paradigm, but ERPs are also commonly used to assess infants’ ability to discriminate the sounds of language (15–17).

Specifically, previous ERP investigations have shown that adults (18, 19) as well as infants (20–23) exhibit the characteristic mismatch negativity (MMN), a negative-polarity waveform that occurs about 250–350 ms after the presentation of the deviant sound, indicating neural discrimination of the change from one phonetic unit to the other.

Importantly, the MMN is elicited in adults and 10- to 12-mo-old infants when listening to sounds of their native language, and it is reduced or absent when they listen to speech sounds that do not represent phonemic categories in their native language (24–26). Thus, we employ both measures of speech discrimination in the present study.

We hypothesize that the mere presence of peers in the present investigation will support children’s ability to discriminate the foreign-language sounds.

During the exposure visits, the effect of peers will be evident in children’s social behavior, and we expect that social cues such as vocalizations and eye gaze, which are indicators of early attention (27, 28) and communication skills (29–31), will emerge as related to infant phonemic learning.

With regard to the measures of sound discrimination, behavioral discrimination will be indicated by above-chance performance in the conditioned head turn paradigm, and neural discrimination by the presence of the MMN in the ERP test. Infant research has shown that attention plays a role in the generation of the MMN. Specifically, auditory change detection can occur with high or low attentional demands that are mediated by language experience (15–17, 25, 32–34), discriminability of the signals (35–37), and maturational factors (38–42).

The MMN associated with high attentional demands in the perception of speech sounds exhibits a positive polarity (positive-MMR or pMMR) and is considered a less-mature MMN response.

The MMN associated with low attentional demands exhibits a negative polarity (i.e., MMN) and, because it is shown by adults listening to native-language sounds, is considered the more mature MMN. In the present investigation, we postulate that infants in the single-infant condition will show pMMRs because high attentional demands are required by a difficult speech discrimination task, such as nonnative speech discrimination (35, 36, 43).

On the other hand, we postulate that infants in the paired-infant condition will process the Chinese phonetic distinction with less effort due to social arousal, and hence we expect the brain response to have a negative polarity (i.e., MMN).
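To make the polarity logic concrete, here is a minimal sketch on synthetic data, not the study’s actual pipeline: it computes the mismatch difference wave (deviant ERP minus standard ERP) and takes its mean amplitude in the 250–350 ms window mentioned earlier, with a negative mean suggesting an MMN-like response and a positive mean a pMMR-like one. The sampling rate and amplitudes are assumptions.

```python
# Illustrative mismatch-response computation on synthetic data (not the
# study's pipeline). Difference wave = deviant ERP - standard ERP; its
# mean amplitude in the 250-350 ms window indicates response polarity.

import numpy as np

t = np.arange(-100, 600)     # epoch time in ms (assumed 1 kHz sampling)
rng = np.random.default_rng(0)

# Synthetic averaged ERPs in microvolts; real data would be EEG averages.
standard = rng.normal(0.0, 0.2, t.size)
deviant = rng.normal(0.0, 0.2, t.size)
deviant -= 1.5 * np.exp(-((t - 300) ** 2) / (2 * 40.0 ** 2))  # inject a negativity

difference = deviant - standard          # the mismatch difference wave

window = (t >= 250) & (t <= 350)         # analysis window from the text
mean_amp = difference[window].mean()

# Negative polarity -> MMN-like (mature); positive -> pMMR-like (less mature).
label = "MMN-like (negative)" if mean_amp < 0 else "pMMR-like (positive)"
print(f"mean amplitude 250-350 ms: {mean_amp:+.2f} uV -> {label}")
```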


Source:
UT Dallas
Media Contacts:
Stephen Fontenot – UT Dallas
Image Source:
The image is credited to UT Dallas/Chris Adam.

Original Research: Closed access
“Early language exposure supports later language skills in infants with and without autism” by Meghan Swanson et al. Autism Research. DOI: 10.1002/aur.2163
