The Gamelan Project: Teaching the Gamelan instrument to elementary school children to study synchrony

Research Highlights

Over the past decade, the National Science Foundation has supported UC San Diego's Science of Learning Center, known as the Temporal Dynamics of Learning Center (TDLC). TDLC's focus was an integrated understanding of the role of time and timing in learning across multiple scales, brain systems, and social systems. Its scientific goal was to understand the temporal dynamics of learning and to apply that understanding to improve educational practice. The entries below highlight many of the research projects that were part of TDLC. They are representative of a vast portfolio of remarkable discoveries and of an accomplished, ingenious, and effective community of scientists.

2016-2017 Highlights



Video Game Training to Improve Eye Gaze Behavior in Children with Autism
Year: 2016-2017; Principal Investigator: Leanne Chukoskie
Network: Interacting Memory Systems Network (IMSN)
Researchers devised experiments to improve the motor planning and execution capabilities of children with autism. Using eye-tracking technology, they collaborated with a developer to create a set of video games that use eye gaze as the controller to steer spaceships, blow up mushrooms, and play whack-a-mole. So far, preliminary results have been promising: subjects have shown improvements in other fixation and spatial attention tasks after daily video game training. (NSF Highlight 2017)





SIMPHONY Study - Studying the Influence Music Practice Has On Neurodevelopment in Youth
Year: 2016-2017; Principal Investigator: John Iversen
Network: Social Interaction Network (SIN)
How does musical training influence a child's brain and the development of skills like language and attention? The Neurosciences Institute, UC San Diego, and the San Diego Youth Symphony have formed a new partnership to address these questions. They are recruiting children between 5 and 8 years of age who receive, or plan to receive, instrumental or vocal music instruction to participate in the SIMPHONY study. (NSF Highlight 2017)




Domain-specific and domain-general individual differences in visual object recognition

Year: 2016-2017; Principal Investigator: Isabel Gauthier
Network: Perceptual Expertise Network (PEN)
Researchers found that performance on Novel Object Memory Tests (NOMTs) varied just as much as on familiar-object tests, but NOMTs showed more shared variance with one another (about 25%) than is typically observed among familiar-object tests (about 11%). Importantly, the researchers verified that the ability measured by the NOMTs is not explained by general cognitive skills: the shared variance between NOMTs remained unchanged after controlling for performance on various measures of general intelligence. (NSF Highlight 2017)




Using game-based technology to enhance real-world interpretation of experimental results
Year: 2016-2017; Principal Investigator: Leanne Chukoskie
Network: Interacting Memory Systems Network (IMSN)
Research on how the brain combines visual, auditory, and movement information is typically conducted in a tightly controlled, albeit rather impoverished, environment. Virtual reality (VR) presents a unique opportunity to maintain stimulus control while placing the observer in a truly immersive environment. (NSF Highlight 2017)





Science of Learning Research Center researchers teach science through making music, and receive continued support from the National Science Foundation

Year: 2016-2017; Principal Investigator: Victor Minces and Alex Khalil
Network: Social Interaction Network (SIN)
If you live in San Diego and have noticed a surge in metallophonic sounds in your neighborhood, this might be the reason: as part of the Science of Learning Research Center's commitment to bringing high-quality science education to the community, cognitive scientists Victor Minces and Alexander Khalil have been working with K-12 schools to teach science through the science of music. In this program, called Listening to Waves, students actively learn the science of waves and perception as they create electronic music and build musical instruments. (NSF Highlight 2017)



Face Camp: A chance for children to explore the science of face recognition

Year: 2016-2017; Principal Investigator: Jim Tanaka
Network: Perceptual Expertise Network (PEN)
Researchers at the University of Victoria have developed an innovative model of STEM education that blends scientific research with scientific outreach. At their annual summer Face Camp, children are introduced to the psychology and neuroscience of face recognition. Conducted at the University of Victoria in British Columbia and organized by the Science of Learning Research Center's Dr. Jim Tanaka, Face Camp is a free, one-day workshop where typically developing and special-needs children learn about the "science, art, and fun" of human face recognition. (NSF Highlight 2017)





Brain waves during sleep in human infants are differentially associated with measures of language development in boys and girls

Year: 2016-2017; Principal Investigator: Sue Peters and April Benasich
Network: Sensorimotor Network (SMN)
Science of Learning Research Center researchers Drs. Sue Peters and April Benasich, at the Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, have shown that brain rhythms in the sleep-spindle range differ between boys and girls, that this difference is most prominent on the left side of the brain, and that it is associated with language measures. (NSF Highlight 2017)



Neuromodulator Acetylcholine increases the capacity of the brain to perceive the world
Year: 2016-2017; Principal Investigator: Victor Minces and Andrea Chiba
Network: Interacting Memory Systems Network (IMSN)
Members of the Science of Learning Research Center worked synergistically to unveil a surprising effect of acetylcholine: When acetylcholine is very active, the neurons in the visual cortex of an animal become more independent of each other. This increased independence boosts the capacity of the visual cortex to represent visual stimuli. (NSF Highlight 2017)




Making Games with Movement

Year: 2016-2017
Thirty-five high school and college students participated in a "Making Games with Movement" Hackathon, held June 22-23, 2017. The students made video games that required movement as input. (NSF Highlight 2017)





Understanding how the brain represents and processes complex, time-varying streams of sensory information
Year: 2016-2017; Principal Investigator: Dan Feldman
Network: Sensorimotor Network (SMN)
Dan Feldman and his team at UC Berkeley, members of the NSF-funded Science of Learning Research Center, have made an important advance in understanding how the brain represents and processes complex, time-varying streams of sensory information. This issue is central for understanding how organisms recognize temporal patterns of sensory input, including in speech perception. (NSF Highlight 2016-2017)

 

2015-2016 Highlights



Training Facial Expressions in Autism
Individuals with Autism Spectrum Disorder (ASD) learn to produce accurate facial expressions and remediate expression skills by playing FaceMaze, a fun, gamified expression-training platform that uses real faces and Emotient's real-time facial expression recognition feedback. After FaceMaze training, preliminary analysis shows that participants improve in their ability to perceive and produce facial emotions.





Understanding the neural code that supports the individuation of similar faces
Researchers from Carnegie Mellon University have shown that it is possible to reconstruct a novel face image based on the observer's behavioral or neural response to a very large set of homogeneous faces (Nestor, A., Plaut, D. C. and Behrmann, M.). From a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach and, from a theoretical perspective, the current results provide key insights into the nature of high-level visual representations.

 




Thickness of Cortical Grey Matter Predicts Face and Object Recognition
Sophisticated techniques allowed for segmentation of human grey matter and estimates of regional cortical thickness. Individual differences in the cortical thickness of pea-sized regions in the inferior temporal cortex predicted behavioral recognition performance on faces and objects. While subjects with a thicker cortex performed better with vehicles, those with a thinner cortex performed better with faces and living objects.

 




Science of Learning Research Center Researchers Advocate for Science of Learning in Washington DC (2015-16)
In June and September 2015, Science of Learning Research Center scientists and trainees met with various elected officials and federal agency leaders to advocate for support of Science of Learning research, training, and translation, as well as Science, Technology, Engineering and Math (STEM) education and diversity initiatives.

 





Plasticity in Developing Brain:
Active Auditory Exposure Impacts Prelinguistic Acoustic Mapping

Researchers at the Infancy Studies Laboratory at the Center for Molecular and Behavioral Neuroscience, using a series of 8-10 minute experimental sessions with babies ages four to seven months, discovered a way to help infants organize the brain pathways that will later support language perception.





Teaching Emotional Skills to Children with Autism

The Science of Learning Research Center's Jim Tanaka at the University of Victoria is partnering with Marni Bartlett at UC San Diego's Machine Perception Lab to create an exciting, innovative software game that helps children on the autism spectrum recognize and produce facial emotions using a state-of-the-art computer technology, Emotient Analytics.




A New Test for Individual Differences Research in Face Recognition

Researchers in the NSF-sponsored Science of Learning Research Center have developed a new task for measuring individual differences in holistic face processing: the Vanderbilt Holistic Face Processing Test (VHPT-F).

 

2014-2015 Highlights




Brain Research Shows Different Pathways Are Responsible for Person and Movement Recognition

Researchers from University College London (UCL), Carnegie Mellon University, and UC San Diego have found that the ability to understand different movements, such as walking and jumping, engages different brain mechanisms from those used to recognize who is initiating the action (Gilaie-Dotan, S., Saygin, A. P., Lorenzi, L. J., Rees, G., and Behrmann, M.). In the study, individuals with lesions to the ventral aspects of the visual pathway evinced normal biological motion perception despite their marked impairments in form perception.




Using Automated Facial Expression Recognition Technology to Distinguish Between Cortical and Subcortical Facial Motor Control
Researchers at UC San Diego, University at Buffalo, and University of Toronto have developed a computer vision system that distinguishes faked from genuine facial expressions of pain. The system outperformed human observers, who had at most a 55 percent success rate even with training; the computer vision and pattern recognition system was accurate about 85 percent of the time.




The Science of Learning Research Center's First MOOC Yields a Staggering Number of Students on Coursera!
The Science of Learning Research Center's Dr. Terry Sejnowski and Visiting Scholar Dr. Barbara Oakley have put together a Massive Open Online Course (MOOC) for Coursera on "Learning How to Learn: Powerful mental tools to help you master tough subjects."




Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance
New research from a team at Vanderbilt University, part of the NSF-supported Temporal Dynamics of Learning Center, used functional magnetic resonance imaging (fMRI) to study the structural correlates of face and object recognition ability.




Science of Learning Research Center Researchers Advocate for Science of Learning in Washington DC (2014)
During the week of the Society for Neuroscience Annual Meeting in Washington DC this past November, Science of Learning Research Center scientists and trainees met with various elected officials and federal agency leaders to advocate for support of Science of Learning research, training, and translation, as well as Science, Technology, Engineering and Math (STEM) education and diversity initiatives.




A New Method Applied to Kinematic Data Reveals Hidden Influences on Reach and Grasp Trajectories
Researchers at the University of Victoria (Canada) have developed new statistical procedures that allow them to measure the influence of competing action intentions on the execution of a reach and grasp response. This method is capable of detecting perturbations as small as a fraction of a degree in the rotation of the hand and as little as a few millimeters in its position.


 

2013-2014 Highlights



Personalized Review Improves Students’ Long-Term Knowledge Retention
A software tool that provides individualized review of course material to middle-school students produces a 16.5% boost in retention of complete course content one month after the term’s end, relative to current educational practice. Individualized review also leads to a 10% improvement over a more generic one-size-fits-all review strategy.




Face Perception
The Science of Learning Research Center's Kao-Wei Chua, Jennifer Richler, and Isabel Gauthier of Vanderbilt University have discovered that the special strategy used to look at faces can be altered in just a few hours of training.






Neural Systems for the Visual Processing of Words and Faces
Using behavioral and electrophysiological measures in adults, Eva Dundas, David Plaut, and Marlene Behrmann observed the standard finding of greater accuracy and a larger N170 ERP component in the left over the right hemisphere for words, and, conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces.






Predicting Memory from EEG
The Science of Learning Research Center's Eunho Noh and Virginia de Sa at the University of California, San Diego, and Grit Herzmann and Tim Curran at the University of Colorado, Boulder, have found that they can predict (with 57.2% accuracy) whether someone will remember an upcoming picture from the scalp voltage (electroencephalography, EEG) recorded prior to the picture's presentation.




Temporal Coding in the Dentate Gyrus
The Science of Learning Research Center's Lara Rangel, Andrew Alexander, Brad Aimone, Janet Wiles, Rusty Gage, Andrea Chiba, and Laleh Quinn have discovered that granule cells in the dentate gyrus of the hippocampus encode the temporal separation between experiences that occur over long periods of time.




Attention in Children is Related to Interpersonal Timing
Science of Learning Research Center researchers led by Victor Minces and Alexander Khalil have found that children's ability at interpersonal timing, or synchrony, is related to attention as measured by cognitive tests and teacher questionnaires.




Human-Robot Interaction (HRI) as a Tool to Monitor Socio-Emotional Development in Early Childhood Education
The Science of Learning Research Center's Mohsen Malmir, Deborah Forster, and Javier Movellan in the Machine Perception Laboratory, in collaboration with Kathryn Owens and Lydia Morrison from UC San Diego's Early Childhood Education Center (ECEC), demonstrated the potential of using social educational robots in the classroom (Malmir et al., 2013; Movellan et al., 2014). The robot not only successfully matched the staff's independent evaluation of children's game preferences by capturing affective behavior (via facial expression recognition), but also monitored relational aspects of spontaneous behavior among young children.




Micro-valences: perceiving affective valence in everyday objects
Sophie Lebrecht, Moshe Bar, Lisa Feldman Barrett, and Michael J. Tarr (2013)

New research from Carnegie Mellon University's Center for the Neural Basis of Cognition (CNBC) shows that the brain's visual perception system automatically and unconsciously guides decision-making through something called valence perception. Valence — defined as “the positive or negative information automatically perceived in the majority of visual information” — is a process that allows our brains to quickly make choices between similar objects. The researchers conclude that “everyday objects carry subtle affective valences – ‘micro-valences’ – which are intrinsic to their perceptual representation.”




Learning to read may trigger right-left hemisphere difference for face recognition
Marlene Behrmann, Eva Dundas, David Plaut - Carnegie Mellon University

Whereas, in this study, adults showed the expected left and right visual field superiority for face and word discrimination, respectively, the young adolescents demonstrated only the right field superiority for words and no field superiority for faces. Although the children's overall accuracy was lower than that of the older groups, like the young adolescents, they exhibited a right visual field superiority for words but no field superiority for faces. Interestingly, the emergence of face lateralization was correlated with reading competence, measured on an independent standardized test, after regressing out age, quantitative reasoning scores and face discrimination accuracy.




Computer-Based Cognitive and Literacy Skills Training Improves Students' Writing Skills
B. Rogowsky, P. Papamichalis, L. Villa, S. Heim, P. Tallal (2013)

A study conducted at Rutgers University finds that cognitive and literacy skills training improves college students' basic writing skills.


 

2013 and earlier



Toward optimal learning dynamics
Garrison W. Cottrell and the Science of Learning Research Center

As outlined in a recent Science article coauthored by members of the Science of Learning Research Center and LIFE centers, transformative advances in the science of learning require collaboration across multiple disciplines, including psychology, neuroscience, machine learning, and education. The Science of Learning Research Center has implemented this approach through the formation of research networks: small interdisciplinary teams focused on a common research agenda.




A review of STDP by Science of Learning Research Center investigator Dan Feldman is featured in Neuron
The Spike-Timing Dependence of Plasticity (Neuron, 8/23/12)

It has been 15 years since the discovery of spike timing-dependent plasticity (STDP), which has become a leading candidate mechanism for information storage and learning in the nervous system. This review summarizes our current understanding of STDP, from its varied forms and cellular mechanisms to its theoretical properties and the evidence that it contributes to plasticity and learning in vivo.





"Sex matters: Guys recognize cars and women recognize birds best"
September 17, 2012

Results published online in the journal Vision Research describe research by Isabel Gauthier and her colleagues revealing sex effects in object recognition (The Vanderbilt Expertise Test Reveals Domain-General and Domain-Specific Sex Effects in Object Recognition).






Cortical Rhythms in the Human Brain During Free Exploration are Linked to Spatial Memory

The Science of Learning Research Center researcher Joe Snider, trainee Markus Plank, and PI Howard Poizner, along with colleagues Gary Lynch and Eric Halgren, are participating in exciting work in the Motion Capture Lab at UC San Diego. By combining motion capture, virtual reality, and high-density electroencephalographic (EEG) recordings, they aim to identify the neural processes, reflected in EEG temporal dynamics, that underlie active spatial exploration and memory. In a study funded by a grant from the Office of Naval Research (ONR), subjects actively explore an environment on a virtual aircraft carrier deck presented through a lightweight head-mounted display (HMD) with a total of 12 miniature monitors. The researchers are finding that cortical rhythms recorded as subjects freely walk about a large-scale virtual environment predict future memory for that environment. http://vimeo.com/28649538





Different kinds of visual learning reflect different patterns of change in the brain
In two recent articles, Yetta Wong, Jonathan Folstein, and Isabel Gauthier, members of the NSF-supported Temporal Dynamics of Learning Center, compared two different kinds of learning traditionally called "perceptual expertise" and "perceptual learning".






Let's Face It! and CERT help autistic children (2012)
The Science of Learning Research Center's Jim Tanaka (University of Victoria) and Marni Bartlett (UC San Diego's Machine Perception Lab) have joined forces to develop a new state-of-the-art intervention treatment to help children with autism.






Early Interventions: Baby Brains May Signal Later Language Problem (2011)
Research by the Science of Learning Research Center investigator April Benasich and team suggests that the way infants only a few months old process sound in their brains is highly predictive of later language development in normally developing children as well as children at risk for language disorders.


The Gamelan Project: The ability of a child to synchronize correlates with attentional performance (2011)
The gamelan project pilot study demonstrates that the ability of a child to synchronize with an external source in a group setting correlates significantly with established measures of attentional performance.





Computer-Based Cognitive and Literacy Skills Training Improves Students' Writing Skills (2011)





The Science of Learning Research Center, Music and the Brain (2011)
There is growing interest among Science of Learning Research Center scientists in the effects of music on the brain.



Partnership between UC San Diego, The Neurosciences Institute, and the San Diego Youth Symphony - Fall 2011.



SCCN and Music/Brain Research - 2011

Quartet for Brain and Trio

The Science of Learning Research Center investigator Scott Makeig, Director of the Swartz Center for Computational Neuroscience (SCCN), is interested in integrating music into his research. He uses a brain-computer interface to read emotions and convert those emotions into musical tones.






Patients with congenital face blindness outperform controls on face perception test
Manuscript under review, Neuropsychologia
Collaborators: Avidan, Tanzer & Behrmann

Individuals born with face-blindness (congenital prosopagnosia), while impaired at recognizing familiar faces and even making perceptual judgments about whether two unknown faces are the same or different, are better than matched controls at detecting similarities/differences between parts of two faces in a composite face comparison task.





Holistic Processing Predicts Face Recognition
Accepted 12/10/10 for publication in Psychological Science

Collaborators: Jennifer J. Richler, Olivia S. Cheung & Isabel Gauthier

The concept of holistic processing (HP) is a cornerstone of face recognition research. We demonstrate that HP predicts face recognition abilities on the Cambridge Face Memory Test and a perceptual face identification task. Our findings validate a large body of work on face recognition that relies on the assumption that HP is related to face recognition.




Inverted Faces are (Eventually) Processed Holistically (in press)
Collaborators: Jennifer J. Richler, Michael L. Mack, Thomas J. Palmeri & Isabel Gauthier -- Vanderbilt University

Face inversion effects are used as evidence that faces are processed differently from objects. Nevertheless, there is debate about whether processing differences between upright and inverted faces are qualitative or quantitative.




Dr. April Benasich: Four new papers in press (November/December 2010)

  • Maturation of auditory evoked potentials from 6 to 48 months: Prediction to 3 and 4 year language and cognitive abilities
  • Source localization of event-related potentials to pitch change mapped onto age-appropriate MRIs at 6 months of age
  • Involuntary switching of attention mediates differences in event-related responses to complex tones between early and late Spanish-English bilinguals
  • Reduced sensory oscillatory activity during rapid auditory processing as a correlate of language-learning impairment


Pashler, Mozer, Movellan



The Power of Study and Testing Spacing (November 2010)




Neurons cast votes to guide decision-making (October 2010)
(From Vanderbilt News, October 8, 2010)




Music helps explain a paradox in research on faces and Chinese characters (2010)




Recognizing Images Using Fixations (2010)




The Gamelan Project - Exploring music and temporal perception in children (June 2010)




Enhancing facial expression recognition and production in children with autism (May 2010)




A computer vision system automatically recognizes facial expressions of students during problem solving (May 2010)



Our Rich Cognitive Abilities (February 2010)

Ah to be an Expert (February 2010)


Visual Pathways Fine Tuned over Time (February 2010)

Entrainment of Hippocampal Neurons by Theta Rhythm

The Wiring is Not Right: Congenital Prosopagnosia

Adolescents with Autism process faces as wholes but are not sensitive to configuration

Size of Infant's Amygdala Predicts Language Ability


Can You Recognize a Face with a Single Glance?
Task-driven salience: Directing Gaze for Visual Search
Your Lips, Your Eyes, Your Face!


The representation of hand actions in auditory sentence comprehension

Effect Of Gamma Waves On Cognitive And Language Skills In Children


What do we know about the color of men and women?


Machine Perception Lab PhD Student Turns Face Into Remote Control



O'Reilly Workshop on Models of High-Level Vision
Faces Studied as Parts are Processed as Wholes
It's All Chinese to Me
Faces Equally Special in Different Spatial Formats
NSF Face Camp
NIMBLE Eyes
We Have A Memory Advantage for Faces
The Musical Brain Sees Faster
The Neural Basis of Audio/Visual Event Perception

Cross-Country Data Grid
Brains R Us
Temporal Dynamics of Learning Center Promotes Discussions on Collaboration
Maturation of Psychological and Neural Mechanisms Supporting Face Recognition
Knowing An Object Is There Does Not Necessarily Mean You Know What It Is
Similar Faces Show Object-like Categorization
Distribution and Support of Face Recognition Training Software

Social Interaction and the Dynamics of Learning
Learning to Become an Expert