Professor Shihab Shamma (ECE/ISR) is the principal investigator for "The computational and neural basis of statistical learning during musical enculturation," a three-year, $232K collaborative research grant in the NSF Cognitive Neuroscience program.
Music is everywhere, played and enjoyed in every known human culture, past and present, and most people spontaneously engage with and enjoy it. Despite major structural differences across cultures (in melodies, harmonies, timbres, rhythms, and more), listeners can appreciate and rapidly learn music to which they were never exposed. Becoming familiar with, and coming to enjoy, an unfamiliar musical culture implies acquiring implicit knowledge of the rules that govern it, such as those shaping its melodies and rhythms. Listeners also automatically build on their own knowledge: their reaction to any music, familiar or not, reflects their own musical culture.
A musical signal can be described as a sequence of symbols: notes of specific pitches and durations that together form the melody we hear. Such sequences can serve as input to a computational model (e.g., the well-known IDyOM model), which computes their statistics over large corpora of musical scores. When trained on scores from different cultures, the model "learns" each culture's specificities.
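To make this concrete, the sketch below shows the statistical-learning idea in miniature: a bigram model that learns pitch-transition statistics from a small corpus and assigns a surprisal (information content) to each note of a new melody. This is only an illustration of the principle, not the actual IDyOM software, which uses much richer variable-order models; the corpus, pitch values, and function names here are invented for the example.

```python
import math
from collections import Counter

def train_bigram(melodies):
    """Count how often each pitch follows each preceding pitch in the corpus."""
    counts = {}
    alphabet = set()
    for melody in melodies:
        alphabet.update(melody)
        for prev, nxt in zip(melody, melody[1:]):
            counts.setdefault(prev, Counter())[nxt] += 1
    return counts, alphabet

def information_content(counts, alphabet, melody):
    """Per-note surprisal -log2 P(note | previous note), with add-one smoothing."""
    vocab = alphabet | set(melody)
    ics = []
    for prev, nxt in zip(melody, melody[1:]):
        ctx = counts.get(prev, Counter())
        p = (ctx[nxt] + 1) / (sum(ctx.values()) + len(vocab))
        ics.append(-math.log2(p))
    return ics

# Toy "corpus" of melodies as MIDI pitch numbers (invented for illustration;
# a real study would train on large corpora of notated scores per culture).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
]
counts, alphabet = train_bigram(corpus)

# A transition that is rare in the training "culture" (64 -> 66) gets high surprisal.
print(information_content(counts, alphabet, [60, 62, 64, 66]))
```

Per-note surprisal of this kind is the quantity that IDyOM-style analyses typically relate to listeners' expectations and to behavioral and neural responses.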
This project harnesses the power and analytic transparency of the model to investigate the neural and computational mechanisms that underlie musical enculturation: while people's responses to music reflect their long-term exposure to their own culture, they can also rapidly learn and enjoy music to which they were never exposed.
It combines computational and brain-research techniques to elucidate the mechanisms underlying enculturation, an important type of learning and a compelling human phenomenon. Music is a cultural object, but it is also a quantifiable signal, which makes it an ideal domain in which to investigate enculturation. Understanding the mechanisms of musical enculturation will help explain how we acquire the rules and norms of cultural domains more broadly.
August 24, 2023