Listening experiments reveal important data

MADRID (EFE). An international team of researchers has described the process by which the human brain distinguishes music from speech, a finding that could improve therapies for people with language or hearing disorders such as aphasia, according to a study published in the journal PLOS Biology.

Despite human beings’ familiarity with music and speech, scientists have not yet understood how people automatically tell one from the other.

To find out, researchers from the National Autonomous University of Mexico, the Chinese University of Hong Kong, New York University and the Ernst Strüngmann Institute for Neuroscience in Frankfurt, Germany, conducted four auditory experiments with 300 people. Participants listened to highly ambiguous sound clips and judged whether each one was music or speech.

Analysis of how participants classified the clips showed that the speed and regularity of a sound’s modulation drove these spontaneous judgments: sounds with slower modulation rates (below 2 hertz) and more regular modulation were heard as music, while clips with faster rates (above 4 Hz) and more irregular modulation were heard as speech.

“The results showed that the auditory system uses surprisingly simple and basic acoustic parameters to distinguish between music and speech,” said one of the authors, Andrew Chang, a psychology researcher at New York University.

“In general, slower, steadier sound clips of pure noise sound more like music, while faster, more irregular clips sound more like speech,” he added.
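To make the reported criterion concrete, the sketch below shows one way such a rule of thumb could be applied to an audio clip: estimate the dominant rate of the clip’s amplitude envelope and compare it against the 2 Hz and 4 Hz figures quoted above. This is only an illustration under assumed choices (Hilbert-transform envelope extraction, a 0.5–20 Hz search band, no regularity measure); it is not the analysis pipeline used in the study.

```python
# Illustrative sketch, not the study's analysis: label a clip "music-like" or
# "speech-like" from the dominant rate of its amplitude envelope, using the
# 2 Hz / 4 Hz figures quoted in the article. Envelope extraction via the
# Hilbert transform and the 0.5-20 Hz search band are assumptions made here.
import numpy as np
from scipy.signal import hilbert


def modulation_rate(clip, sample_rate):
    """Estimate the dominant amplitude-modulation frequency (Hz) of a clip."""
    envelope = np.abs(hilbert(clip))           # slow amplitude envelope
    envelope -= envelope.mean()                # drop DC so 0 Hz does not dominate
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / sample_rate)
    band = (freqs >= 0.5) & (freqs <= 20.0)    # plausible modulation rates only
    return freqs[band][np.argmax(spectrum[band])]


def classify(clip, sample_rate):
    rate = modulation_rate(clip, sample_rate)
    if rate < 2.0:
        return "music-like"
    if rate > 4.0:
        return "speech-like"
    return "ambiguous"


# Example: white noise modulated slowly (1 Hz) should come out "music-like",
# while the same noise modulated quickly (6 Hz) should come out "speech-like".
sr = 16000
t = np.arange(5 * sr) / sr
noise = np.random.randn(t.size)
print(classify(noise * (1.0 + 0.8 * np.sin(2 * np.pi * 1.0 * t)), sr))
print(classify(noise * (1.0 + 0.8 * np.sin(2 * np.pi * 6.0 * t)), sr))
```

A fuller version would also need some measure of modulation regularity, since the article notes that steadiness, not just rate, is what separates music-like from speech-like sounds.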

The researchers believe their study could help improve the treatment of people with hearing and language disorders, especially those who need to recover the ability to speak after aphasia, a disorder that affects approximately 1 in 300 people.

Along these lines, melodic intonation therapy, in which patients are taught to sing what they want to say so that intact “musical mechanisms” can bypass damaged speech processes, is one of the most promising approaches for re-teaching speech to people with aphasia, which usually results from a stroke.

Knowing how the brain tells music and speech apart is essential for designing effective rehabilitation programs such as melodic intonation therapy, the authors conclude.
