Learning a second language is easier for some adults than others, and innate differences in how the various parts of the brain "talk" to one another may help explain why, according to a study published January 20 in the Journal of Neuroscience.
It takes just one-tenth of a second for our brains to begin to recognize emotions conveyed by vocalizations, according to researchers from McGill. It doesn’t matter whether the non-verbal sounds are growls of anger, the laughter of happiness or cries of sadness. More importantly, the researchers have also discovered that we pay more attention when an emotion (such as happiness, sadness or anger) is expressed through vocalizations than we do when the same emotion is expressed in speech.
Teenagers are not the sole drivers of language change, according to Kansas State University research. Language change occurs throughout a lifetime, not just during the teenage years.
A new University of Iowa study finds babies make more speech-like sounds during reading than when playing with puppets or toys—and mothers are more responsive to these types of sounds while reading to their child than during the other activities.
Northwestern University's Nina Kraus has pioneered a way to measure how the brain makes sense of sound. Her findings suggest that the brain’s ability to process sound is influenced by everything from playing music and learning a new language to aging, language disorders and hearing loss.
Jami Fisher, a lecturer in the University of Pennsylvania's Department of Linguistics, has a long history with American Sign Language: both of her parents and her brother are deaf, and she coordinates Penn's ASL Program. Now, with Meredith Tamminga, an assistant professor in Linguistics and director of the University's Language Variation and Cognition Lab, she is working on a project to document what they're calling the Philadelphia accent of the language.
You may believe that you have forgotten the Chinese you spoke as a child, but your brain hasn’t. Moreover, that “forgotten” first language may well influence what goes on in your brain when you speak English or French today.
In a paper published today in Nature Communications, researchers from McGill University and the Montreal Neurological Institute describe their discovery that even brief, early exposure to a language influences how the brain processes sounds from a second language later in life, even when the first language learned is no longer spoken.
A new study done by University of Texas at Dallas researchers indicates that watching 3-D images of tongue movements can help individuals learn speech sounds. Researchers say the findings could be especially helpful for stroke patients seeking to improve their speech articulation.
Researchers have examined the relationship between the sound structures of a worldwide sample of human languages and climatic and ecological factors including temperature, precipitation, vegetation and geomorphology. The results, to be presented at ASA’s 2015 Fall Meeting, Nov. 2-6, show a correlation between ecological factors and the ratio of sonorant segments to obstruent segments in the examined languages. This supports the hypothesis that acoustic adaptation to the environment plays a role in the evolution of human languages.
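The sonority metric in this study is easy to state concretely: count the sonorant segments (vowels, nasals, liquids, glides) and the obstruent segments (stops, fricatives, affricates) a language uses, and take their ratio. A minimal sketch, with an illustrative segment classification that is not the study's actual coding scheme:

```python
# Minimal sketch: ratio of sonorant to obstruent segments in a phoneme
# inventory. The segment classes below are illustrative only, not the
# study's actual coding scheme.

SONORANTS = {"m", "n", "ŋ", "l", "r", "w", "j", "a", "e", "i", "o", "u"}
OBSTRUENTS = {"p", "b", "t", "d", "k", "g", "f", "v", "s", "z", "ʃ", "tʃ"}

def sonority_ratio(inventory):
    """Return the ratio of sonorant to obstruent segments in an inventory."""
    sonorants = sum(1 for seg in inventory if seg in SONORANTS)
    obstruents = sum(1 for seg in inventory if seg in OBSTRUENTS)
    return sonorants / obstruents if obstruents else float("inf")

# Toy inventory: 7 sonorants and 5 obstruents.
inventory = ["m", "n", "l", "r", "w", "a", "i", "p", "t", "k", "s", "z"]
print(sonority_ratio(inventory))  # 7 / 5 = 1.4
```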
After a debate that has lasted more than 130 years, researchers at Georgetown University Medical Center have found that speech lost to a stroke in the brain's left hemisphere can be recovered in the back of the right hemisphere. This contradicts recent notions that the right hemisphere interferes with recovery.
Research published earlier this year claiming that chimpanzees can learn each other's language is not supported by the evidence, a team of scientists concludes after reviewing the study.
In a new study from the University of Montreal, infants remained calm twice as long when listening to a song, even one they did not know, as they did when listening to speech.
University of Missouri research shows that babies’ repetitive babbles, such as "baba" or "dada," primarily are motivated by infants’ ability to hear themselves. Infants with profound hearing loss who received cochlear implants to improve their hearing soon babbled as often as their hearing peers, allowing them to catch up developmentally.
According to the African Stuttering Research Center, there is just one therapist for every 37,483 people who stutter in Africa. Florida Atlantic University is the first to provide free tele-therapy for patients who stutter in Africa.
Would a color by any other name be thought of in the same way, regardless of the language used to describe it? According to new research, the answer is yes.
An automated speech analysis program correctly differentiated between at-risk young people who developed psychosis over a two-and-a-half year period and those who did not. In a proof-of-principle study, researchers at Columbia University Medical Center, New York State Psychiatric Institute, and the IBM T. J. Watson Research Center found that the computerized analysis provided a more accurate classification than clinical ratings.
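The summary does not detail what the program measured, but published work on this cohort scored, among other features, the semantic coherence between consecutive sentences of interview transcripts, with abrupt topic jumps flagging disorganized speech. A minimal sketch of that coherence idea, using a toy stand-in embedding (any real sentence-embedding model would replace `embed` below):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def min_coherence(sentences, embed):
    """Minimum semantic similarity between consecutive sentences.

    Low values flag abrupt topic jumps, one candidate marker of
    disorganized speech. `embed` maps a sentence to a vector.
    """
    vectors = [embed(s) for s in sentences]
    return min(cosine(a, b) for a, b in zip(vectors, vectors[1:]))

# Toy stand-in embedding: bag-of-words over a tiny fixed vocabulary.
VOCAB = ["dog", "park", "ran", "sky", "blue", "cold"]
def embed(sentence):
    words = sentence.lower().split()
    return np.array([words.count(w) + 1e-6 for w in VOCAB])

transcript = ["the dog ran in the park", "the dog ran", "the sky is blue and cold"]
print(min_coherence(transcript, embed))  # low: the last sentence changes topic
```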
Bilingual children pose unique challenges for clinicians, and, until recently, there was little research on young bilinguals to guide clinical practice. A researcher at Florida Atlantic University provides important insight for clinicians.
When an athlete tries to breathe deeply and struggles to get air, performance suffers and stress takes over. This common symptom is easily misdiagnosed but can signal a physical issue that many sports health care professionals may be unaware of. Luckily, an unlikely pair of medical professionals at Ithaca College are teaming up to help athletes recover from this troublesome condition.
A new study finds that American political speech has become more polarized across party lines over time, with a clear trend break around 1980, and that current levels are unprecedented.
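One way to make "polarization of speech" concrete is to ask how often a listener could guess a speaker's party from a single phrase. The sketch below illustrates that crude plug-in index with invented phrase counts; the study itself uses a more careful estimator, since naive indices of this kind are known to be biased upward in small samples:

```python
# Toy sketch of a crude polarization index: expected probability of
# guessing a speaker's party from one phrase. Counts are invented.

dem_counts = {"estate tax": 10, "tax relief": 2, "health care": 30}
rep_counts = {"estate tax": 3, "tax relief": 25, "health care": 20}

def polarization(dem, rep):
    phrases = set(dem) | set(rep)
    total = sum(dem.values()) + sum(rep.values())
    score = 0.0
    for p in phrases:
        d, r = dem.get(p, 0), rep.get(p, 0)
        # P(phrase) * P(correct guess | phrase): pick the majority party.
        score += (d + r) / total * max(d, r) / (d + r)
    return score  # 0.5 = no information, 1.0 = perfectly separable

print(round(polarization(dem_counts, rep_counts), 3))  # 0.722
```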
People with a common form of hearing loss not helped by hearing aids achieved significant and sometimes profound improvements in their hearing and understanding of speech with hybrid cochlear implant devices, according to a new multicenter study led by specialists at NYU Langone Medical Center.
Facial motion capture – the same technology used to develop realistic computer graphics in video games and movies – has been used to identify differences between children with childhood apraxia of speech and those with other types of speech disorders, finds a new study by NYU’s Steinhardt School of Culture, Education, and Human Development.
Speech, whether produced or perceived, generates electrical activity in neurons that neuroscientists measure in the form of "cortical oscillations". To understand speech, as with other cognitive or sensory processes, the brain breaks down the information it receives in order to integrate it and give it a coherent meaning. Until now, however, researchers could not confirm whether these oscillations were mere by-products of neuronal activity or played an active role in speech processing. Professor Anne-Lise Giraud and her team at the Faculty of Medicine of the University of Geneva (UNIGE) have now addressed this question with a computerized model of neuronal microcircuits, which highlights the crucial role of neuronal oscillations in decoding spoken language, independently of a speaker's pace or accent.
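The published model is reported as a network of spiking neurons coupling theta- and gamma-band circuits, far too detailed to reproduce here. The core intuition, though, an oscillator that locks onto the syllabic rhythm of the speech envelope regardless of speaking rate, can be shown with a minimal phase-oscillator sketch (illustrative only, not the authors' model):

```python
import math

def entrain(envelope, dt=0.001, natural_hz=5.0, coupling=8.0):
    """Minimal phase oscillator nudged by a speech amplitude envelope.

    The oscillator prefers a theta-range rhythm (~5 Hz, roughly the
    syllable rate) but shifts its phase toward envelope bursts, so it
    stays locked to syllables even when the speaker talks faster or
    slower. Returns the phase trajectory.
    """
    phase, phases = 0.0, []
    for amp in envelope:
        # Intrinsic drift plus an envelope-driven phase correction.
        phase += dt * (2 * math.pi * natural_hz + coupling * amp * math.sin(-phase))
        phases.append(phase % (2 * math.pi))
    return phases

# Toy envelope: syllable-like bursts at 4 Hz for one second.
envelope = [1.0 if (t % 250) < 50 else 0.0 for t in range(1000)]
print(entrain(envelope)[-1])
```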
Some children with autism should undergo ongoing screenings for apraxia, a rare neurological speech disorder, because the two conditions often go hand-in-hand, according to Penn State College of Medicine researchers.
Psychoacoustics identifies five basic types of emotional speech: angry, fearful, happy, sad and neutral. In order to fully understand what’s happening with speech perception, a research team at the University of Texas at Austin studied how depressed individuals perceive these different kinds of emotional speech in multi-tonal environments. They will present their findings at the 169th ASA meeting, held this week in Pittsburgh.
The University of Wisconsin-Milwaukee gives students the opportunity to study the Anishinaabe language spoken by the Ojibwe, Potawatomi and Odawa tribes.
“The growing diversity of American households is causing parents to debate the benefits and detriments of raising their children to be bilingual,” says Megan Riordan, speech-language pathologist at Loyola University Health System. “Many respected medical professionals suggest that parents refrain from speaking their native language to avoid confusing their child.” Here are common questions asked by bilingual parents, along with expert answers.
As social creatures, we tend to mimic each other’s posture, laughter, and other behaviors, including how we speak. Now a new study shows that people with similar views tend to more closely mirror, or align, each other’s speech patterns. In addition, people who are better at compromising align more closely.
“During the preschool period, children see and interact with a variety of print at home, in the community and at daycare or school,” says Kaitlin Vogtner Trainor, speech-language pathologist at Loyola University Health System. “This exposure to print builds phonological awareness skills, the recognition that words are made up of separate speech sounds, which leads to stronger reading and writing skills later in life.”
“Baby talk is sometimes associated with nonsense words and sounds, and it can even distort the sounds of words, providing inaccurate models for infants and developing children; this is not encouraged,” says Kathleen Czuba, speech-language therapist at Loyola University Health System. “Research in the field of child development and speech and language acquisition instead recommends the use of ‘parentese.’ This type of speech has been shown to positively support the development of speech and language.”
“Being aware of the benchmarks of development can help caregivers and parents make sure children in their care are progressing appropriately,” says Kaitlin Vogtner Trainor, speech-language pathologist at Loyola University Health System. “Lapses in development can also help identify medical conditions.”
“Challenges with speech and language are likely to have an impact on a child’s overall development, including social skills and academics, and can even affect a child’s behavior,” says Kathleen Czuba, speech-language therapist, Loyola University Health System. “The earlier a child's speech and language problems are identified and treated, the less likely it is that problems will persist or get worse.”
Baby talk, which includes higher-pitched voices and a wider range of pitches, is sometimes known as "motherese," partly because most research on parent-child interactions has traditionally focused on the mother's role. Scientists study this common behavior because they want to understand what role such speech patterns play in children’s language acquisition. But in an era of increased paternal involvement, researchers are investigating whether fathers modify their speech in the same way mothers do.
A team of NYU neuroscientists has identified a part of the brain exclusively devoted to processing speech, helping settle a long-standing debate about role-specific neurological functions.
Studies have shown that individuals with hearing loss, or who are listening to degraded speech (think of a loud room), have greater difficulty remembering and processing spoken information than individuals who hear it clearly. Now researchers are investigating whether listening to accented speech similarly affects the brain's ability to process and store information. Their preliminary results suggest that foreign-accented speech, even when intelligible, may be slightly more difficult to recall than native speech.
Although the human ability to write evolved from our ability to speak, writing and talking are now such independent systems in the brain that someone who can’t write a grammatically correct sentence may be able to say it aloud flawlessly.
A new study by a linguistics professor and an alumnus from The University of Texas at Austin sheds light on a well-known linguistic characteristic of autistic children — their reluctance to use pronouns — paving the way for more accurate diagnostics.
Wichita State University grad student Jennifer Francois is studying the ways in which infants' eyes track their mothers' faces, a small detail that can have a big impact on a child's foundation for future language development.
Compared with English speakers, Mandarin speakers rely more on tone of voice than on facial cues to understand emotion. This may be a result of the limited eye contact and more restrained facial expressions common in East Asian cultures.
A statistical technique that sorts out when changes to words’ pronunciations most likely occurred in the evolution of a language offers a renewed opportunity to trace words and languages back to their earliest common ancestor or ancestors.
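As an intuition for how such a technique localizes a change, consider the simplest (parsimony) version of the problem: given a family tree and the set of languages showing a new pronunciation, find the one branch on which a single change explains the data. The tree and the change below are invented for illustration; the actual study fits a statistical model of change timing rather than simple parsimony:

```python
# Minimal sketch: place a single sound change on a language family tree
# by parsimony, i.e. find the one branch whose descendants are exactly
# the languages showing the changed pronunciation.

TREE = ("Proto", [("North", [("LangA", []), ("LangB", [])]), ("LangC", [])])
CHANGED = {"LangA", "LangB"}  # languages showing the new pronunciation

def leaves(node):
    """Set of leaf (language) names below a node."""
    name, children = node
    return {name} if not children else set().union(*(leaves(c) for c in children))

def find_change_branch(node):
    """Return the highest node whose leaf set is exactly CHANGED, else None."""
    if leaves(node) == CHANGED:
        return node[0]
    for child in node[1]:
        hit = find_change_branch(child)
        if hit:
            return hit
    return None

print(find_change_branch(TREE))  # "North": the change dates to that branch
```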
Crowdsourcing – where responses to a task are aggregated across a large number of individuals recruited online – can be an effective tool for rating sounds in speech disorders research, according to a study by NYU’s Steinhardt School of Culture, Education, and Human Development.
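The aggregation step at the heart of crowdsourcing is simple: pool many quick judgments from naive listeners and let the average stand in for an expert rating. A minimal sketch, assuming binary correct/incorrect judgments per speech sample (the rating scale and reliability checks in the actual study may differ):

```python
from statistics import mean

def aggregate_ratings(ratings_by_listener):
    """Average binary correct/incorrect judgments across listeners.

    ratings_by_listener maps a listener ID to a list of 0/1 ratings,
    one per speech sample; the pooled mean per sample approximates a
    continuous "goodness" score for that production.
    """
    n_samples = len(next(iter(ratings_by_listener.values())))
    return [
        mean(r[i] for r in ratings_by_listener.values())
        for i in range(n_samples)
    ]

# Three naive listeners rate four productions of /r/ as correct (1) or not (0).
ratings = {
    "listener_a": [1, 0, 1, 1],
    "listener_b": [1, 0, 0, 1],
    "listener_c": [1, 1, 0, 1],
}
print(aggregate_ratings(ratings))  # [1, 0.333..., 0.333..., 1]
```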
Arabic movie subtitles, Korean tweets, Russian novels, Chinese websites, English lyrics, and even the war-torn pages of the New York Times—research from the University of Vermont, examining billions of words, shows that these sources—and all human language—skew toward the use of happy words. This Big Data study confirms the 1969 Pollyanna Hypothesis that there is a universal human tendency to “look on and talk about the bright side of life.”
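The study's basic instrument is a word-happiness lexicon: crowdsourced ratings of how happy individual words feel, averaged over the words of a text. A minimal sketch with invented ratings (the real lexicons cover on the order of 10,000 frequent words per language):

```python
def mean_happiness(text, lexicon):
    """Average per-word happiness over the words found in the lexicon.

    Ratings run 1 (sad) to 9 (happy), with 5 neutral; the positivity
    bias shows up as lexicon-wide and text-wide means above 5.
    """
    scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else None

# Invented ratings for illustration; the real lexicon is far larger.
lexicon = {"laughter": 8.5, "happy": 8.3, "the": 5.0, "war": 1.8, "love": 8.4}
print(mean_happiness("The laughter and the love outlast the war", lexicon))
# about 5.6, above the neutral 5
```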
Monolingual infants expect others to understand only one language, an assumption not held by bilingual infants, a study by researchers at New York University and McGill University has found.
The same species of monkeys located in separate geographic regions use their alarm calls differently to warn of approaching predators, a linguistic analysis by a team of scientists reveals. The study shows that monkey calls have a more sophisticated structure than was commonly thought.
Physical aggression in toddlers has been thought to be associated with the frustration caused by language problems, but a recent study by researchers at the University of Montreal shows that this isn’t the case. The researchers did find, however, that parental behaviors may influence the development of an association between the two problems during early childhood. Frequent hitting, kicking, and a tendency to bite or push others are examples of physical aggression observed in toddlers.