Friday 1st December 2023
Parents should speak to their babies using sing-song speech, like nursery rhymes, as soon as possible, say researchers. That’s because babies learn languages from rhythmic information, not phonetic information, in their first months.
Phonetic information – the smallest sound elements of speech, typically represented by the alphabet – is considered by many linguists to be the foundation of language. Infants are thought to learn these small sound elements and add them together to make words. But a new study suggests that phonetic information is learnt too late and too slowly for this to be the case.
Instead, rhythmic speech helps babies learn language by emphasising the boundaries of individual words and is effective even in the first months of life.
Researchers from Trinity College Dublin and the University of Cambridge investigated babies’ ability to process phonetic information during their first year.
Their study, published today in the journal Nature Communications, found that phonetic information wasn’t successfully encoded until around seven months of age, and was still sparse at 11 months, when babies begin to say their first words.
“Our research shows that the individual sounds of speech are not processed reliably until around seven months, even though most infants can recognise familiar words like ‘bottle’ by this point,” said Cambridge neuroscientist Professor Usha Goswami. “From then on, individual speech sounds are still added in very slowly – too slowly to form the basis of language.”
The researchers recorded patterns of electrical brain activity in 50 infants at four, seven and eleven months old as they watched a video of a primary school teacher singing 18 nursery rhymes to an infant. Low-frequency bands of brainwaves were fed through a special algorithm designed by Trinity’s Professor Giovanni Di Liberto, first author of the research article. This approach produced a ‘read out’ of the phonological information that was being encoded.
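The press release does not give the details of Professor Di Liberto’s algorithm. As an illustrative sketch only, the broad family of analyses it belongs to – linear models that relate low-frequency EEG to a time-aligned phonetic-feature signal – can be outlined roughly as follows; every variable name, data shape and parameter value below is a hypothetical choice for illustration, not the study’s actual method.

    # Illustrative sketch only -- NOT the authors' algorithm. It shows the
    # general idea of a linear "read out": mapping time-lagged, low-frequency
    # EEG onto a phonetic-feature time course with ridge regression.
    # All shapes, names and parameter values are assumptions for illustration.

    import numpy as np

    def lagged_design(eeg, lags):
        """Stack time-lagged copies of the EEG (time x channels) into one matrix."""
        n_times, n_chans = eeg.shape
        X = np.zeros((n_times, n_chans * len(lags)))
        for i, lag in enumerate(lags):
            shifted = np.roll(eeg, lag, axis=0)
            if lag > 0:
                shifted[:lag] = 0.0
            elif lag < 0:
                shifted[lag:] = 0.0
            X[:, i * n_chans:(i + 1) * n_chans] = shifted
        return X

    def fit_readout(eeg, feature, lags, alpha=1.0):
        """Ridge regression mapping lagged EEG to a phonetic-feature time course."""
        X = lagged_design(eeg, lags)
        XtX = X.T @ X + alpha * np.eye(X.shape[1])
        return np.linalg.solve(XtX, X.T @ feature)

    def readout_accuracy(eeg, feature, weights, lags):
        """Correlation between the reconstructed and true feature time course."""
        pred = lagged_design(eeg, lags) @ weights
        return np.corrcoef(pred, feature)[0, 1]

    # Hypothetical data: 60 s of 64-channel EEG at 100 Hz, plus one binary
    # phonetic-feature time course (e.g. nasal vs not) aligned to the sung speech.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((6000, 64))
    feature = (rng.random(6000) > 0.9).astype(float)
    lags = range(0, 25)   # 0-240 ms of EEG following the stimulus
    w = fit_readout(eeg[:4000], feature[:4000], lags)
    print(readout_accuracy(eeg[4000:], feature[4000:], w, lags))

In such analyses, how well the feature can be reconstructed from held-out EEG is taken as a measure of how strongly that phonetic information is encoded in the brain signal, which is the sense in which a ‘read out’ can be compared across ages.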
The researchers found that phonetic encoding in babies emerged gradually over the first year of life, beginning with labial sounds (e.g. b for “baby”) and nasal sounds (e.g. m for “mummy”), with the ‘read out’ progressively looking more like that of adults. Previous studies, by contrast, have relied on comparing responses to nonsense syllables such as “bif” and “bof”.
Professor Giovanni Di Liberto, a cognitive and computer scientist in Trinity’s School of Computer Science and Statistics, and a researcher at the ADAPT Centre, said: “This is the first evidence we have of how the brain encoding of phonetic information in continuous speech changes over time.”
The authors explained that rhythm is a universal aspect of every language spoken across the globe, so perhaps it isn’t surprising that the stress we place on different words and the rise and fall of tone we use to convey emotion while speaking are key to how babies learn language.
“We believe parents should therefore talk and sing to their babies as much as possible, or read them nursery rhymes, because this will help them learn,” added Prof. Di Liberto.
The authors believe there are many exciting future directions for building on these findings.
Prof. Di Liberto added: “One possibility would be to study the development of speech processing beyond phonemes, for example by investigating how abstract concepts are formed. Another important direction that I think has great potential involves studying more realistic scenarios, such as dialogues, which could help us better understand deficits impacting speech communication in people with autism spectrum disorder or social anxiety, for example.”
The research was funded by the European Research Council under the European Union’s Horizon 2020 research and innovation programme and by Science Foundation Ireland. A PDF copy of the research article is available on request.