A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.
A recent study by Jamie Desjardins, Ph.D., an assistant professor in the speech-language pathology program at The University of Texas at El Paso, found that hearing aids improve brain function in persons with hearing loss.
Learning a second language is easier for some adults than others, and innate differences in how the various parts of the brain "talk" to one another may help explain why, according to a study published January 20 in the Journal of Neuroscience.
It takes just one-tenth of a second for our brains to begin to recognize emotions conveyed by vocalizations, according to researchers from McGill. It doesn’t matter whether the non-verbal sounds are growls of anger, the laughter of happiness or cries of sadness. More importantly, the researchers have also discovered that we pay more attention when an emotion (such as happiness, sadness or anger) is expressed through vocalizations than we do when the same emotion is expressed in speech.
Teenagers are not solely causing language change, according to Kansas State University research. Language changes occur throughout a lifetime and not just during the teenage years.
A new University of Iowa study finds babies make more speech-like sounds during reading than when playing with puppets or toys—and mothers are more responsive to these types of sounds while reading to their child than during the other activities.
Northwestern University's Nina Kraus has pioneered a way to measure how the brain makes sense of sound. Her findings suggest that the brain’s ability to process sound is influenced by everything from playing music and learning a new language to aging, language disorders and hearing loss.
Jami Fisher, a lecturer in the University of Pennsylvania's Department of Linguistics, has a long history with American Sign Language. Both of her parents and her brother are deaf, she's Penn's ASL Program coordinator and now, with Meredith Tamminga, an assistant professor in Linguistics and director of the University's Language Variation and Cognition Lab, she's working on a project to document what they're calling the Philadelphia accent of this language.
You may believe that you have forgotten the Chinese you spoke as a child, but your brain hasn’t. Moreover, that “forgotten” first language may well influence what goes on in your brain when you speak English or French today.
In a paper published today in Nature Communications, researchers from McGill University and the Montreal Neurological Institute describe their discovery that even brief, early exposure to a language influences how the brain processes sounds from a second language later in life, even when the first language learned is no longer spoken.
A new study done by University of Texas at Dallas researchers indicates that watching 3-D images of tongue movements can help individuals learn speech sounds. Researchers say the findings could be especially helpful for stroke patients seeking to improve their speech articulation.
Researchers have examined the relationship between the sound structures of a worldwide sample of human languages and climatic and ecological factors including temperature, precipitation, vegetation and geomorphology. The results, to be presented at ASA’s 2015 Fall Meeting, Nov. 2-6, show a correlation between ecological factors and the ratio of sonorant segments to obstruent segments in the examined languages. This supports the hypothesis that acoustic adaptation to the environment plays a role in the evolution of human languages.
After a debate that has lasted more than 130 years, researchers at Georgetown University Medical Center have found that speech lost to a stroke in the left hemisphere of the brain can be recovered in the back of the right hemisphere. This contradicts recent notions that the right hemisphere interferes with recovery.
Research published earlier this year claiming chimpanzees can learn each other's language is not supported, a team of scientists concludes after reviewing the study.
In a new study from the University of Montreal, infants remained calm twice as long when listening to a song, even one they didn’t know, as when listening to speech.
University of Missouri research shows that babies’ repetitive babbles, such as "baba" or "dada," are motivated primarily by infants’ ability to hear themselves. Infants with profound hearing loss who received cochlear implants to improve their hearing soon babbled as often as their hearing peers, allowing them to catch up developmentally.
According to the African Stuttering Research Center, there is just one therapist for every 37,483 people who stutter in Africa. Florida Atlantic University is the first to provide free tele-therapy for patients who stutter in Africa.
Would a color by any other name be thought of in the same way, regardless of the language used to describe it? According to new research, the answer is yes.
An automated speech analysis program correctly differentiated between at-risk young people who developed psychosis over a two-and-a-half year period and those who did not. In a proof-of-principle study, researchers at Columbia University Medical Center, New York State Psychiatric Institute, and the IBM T. J. Watson Research Center found that the computerized analysis provided a more accurate classification than clinical ratings.