Shtrahman lab members Stacy Kim and Emily Yan at the Sanford Consortium for Regenerative Medicine

TDLC Basic Science

TDLC's science is based on more than a decade of research funded by the National Science Foundation (NSF). Our mission is to achieve an integrated understanding of the role of time and timing in learning, across multiple scales, brain systems, and social systems. The scientific goal of TDLC is therefore to understand the temporal dynamics of learning, and to apply this understanding to improve educational practice. The links below highlight some of the research projects that have been part of TDLC. They are representative of the Center's vast portfolio of remarkable discoveries and of its accomplished, ingenious, and effective community of scientists.

TDLC has a large science portfolio. Below is a sampling of projects that involve basic science research:

New Brain Cells in the Hippocampus
Dr. Andrea Chiba at UC San Diego, together with Rusty Gage of the Salk Institute and Janet Wiles of the University of Queensland in Australia, is investigating the role of the new brain cells that are born every day in a part of the brain called the hippocampus, a structure critically important for memory. Their investigations are highly interdisciplinary, combining neurophysiological and behavioral studies with tests of their theories “on the ground” using computer models of the hippocampus implemented in robotic rats.


Brain Computer Interfaces (BCIs)
Dr. Virginia deSa's lab at UC San Diego studies the neural basis of human perception and learning. The deSa lab is interested in how we learn, from both a neural and a computational point of view. Her team studies the computational properties of machine learning algorithms and also investigates what physiological recordings, along with the constraints and limitations of human performance, tell us about how our brains learn. The driving philosophy behind their work is that studying machine learning and human learning together is synergistic. She also focuses on Brain–Computer Interfaces (BCIs), which process brain waves to command computers and other external devices such as artificial limbs. The BCI Division of her lab investigates ways in which BCIs can assist people with injuries or diseases that affect their ability to move and communicate. For instance, she has found possible ways in which BCI might assist Parkinson’s Disease patients with walking.
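
To make the BCI idea concrete, here is a minimal sketch of one common approach: extract frequency-band power from a short window of multichannel EEG and map it to a command with a simple nearest-centroid decoder. The sampling rate, channel count, frequency bands, and command names are illustrative assumptions, not the deSa lab's actual system.

```python
# Minimal BCI sketch: band-power features + nearest-centroid decoding.
# All parameters below are illustrative assumptions, not the deSa lab's pipeline.
import numpy as np

FS = 256  # assumed EEG sampling rate (Hz)

def band_power(window, fs, lo, hi):
    """Mean power per channel in the [lo, hi] Hz band (window: channels x samples)."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / fs)
    return spectrum[:, (freqs >= lo) & (freqs <= hi)].mean(axis=1)

def features(window):
    # Band-power features in the mu (8-12 Hz) and beta (13-30 Hz) ranges,
    # the bands classically used in movement-related BCIs.
    return np.concatenate([band_power(window, FS, 8, 12),
                           band_power(window, FS, 13, 30)])

def decode(window, centroids):
    """Nearest-centroid decoder: centroids maps each command to a feature vector."""
    f = features(window)
    return min(centroids, key=lambda cmd: np.linalg.norm(f - centroids[cmd]))

# Toy usage: two made-up class centroids and one fake 1-second, 8-channel window.
rng = np.random.default_rng(0)
centroids = {"move_left": rng.normal(size=16), "move_right": rng.normal(size=16)}
window = rng.normal(size=(8, FS))
print(decode(window, centroids))
```

In a real system, the centroids (or a trained classifier) would be fit to labeled calibration data from the user, and the decoded command would drive the cursor, wheelchair, or prosthetic device.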


Sensory Information Processing, Language, and Cognitive Development in Infants
Dr. April Benasich and her colleagues at the Infancy Studies Laboratory at Rutgers University–Newark (also known as The Baby Lab) use a range of techniques to examine sensory information processing, language, and cognitive development across the lifespan. In particular, they focus on the early neural processes necessary for normal cognitive and language development, as well as the impact of disordered processing on neurocognitive status in high-risk or neurologically impaired infants. Auditory evoked potentials (EEG/ERPs), complex auditory brainstem responses (cABR), and MRI/fMRI during natural sleep provide converging, noninvasive physiological measures that complement the lab’s extensive behavioral battery. Her findings are groundbreaking: she has demonstrated for the first time that the ability to make fine non-speech acoustic discriminations in early infancy is critically important to, and highly predictive of, later language development. These data further suggest that measures of rapid auditory processing ability may be used to identify, and importantly to remediate, infants at highest risk of language delay or impairment, regardless of risk status.
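
For readers unfamiliar with evoked potentials, the sketch below shows the core ERP computation in its simplest form: cut the continuous EEG into epochs time-locked to each stimulus and average them, so stimulus-locked activity survives while unrelated activity cancels. The sampling rate, epoch window, and toy data are illustrative assumptions only.

```python
# Sketch of the basic ERP logic: epoch continuous EEG around each stimulus
# onset and average. Parameters and data are illustrative assumptions.
import numpy as np

def erp(eeg, onsets, fs, pre=0.1, post=0.5):
    """Average stimulus-locked epochs.

    eeg     : 1-D array, one EEG channel
    onsets  : stimulus onset times in samples
    fs      : sampling rate in Hz
    pre/post: seconds before/after onset to include
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets:
        if t - n_pre >= 0 and t + n_post <= len(eeg):
            epoch = eeg[t - n_pre : t + n_post]
            epochs.append(epoch - epoch[:n_pre].mean())  # baseline-correct
    return np.mean(epochs, axis=0)

# Toy usage: noisy channel with a small deflection 150 ms after each tone.
fs = 500
rng = np.random.default_rng(1)
eeg = rng.normal(scale=10.0, size=fs * 300)
onsets = np.arange(fs, len(eeg) - fs, int(1.2 * fs))
for t in onsets:
    eeg[t + int(0.15 * fs) : t + int(0.20 * fs)] += 2.0  # simulated evoked response
evoked = erp(eeg, onsets, fs)
print("peak of averaged evoked response:", round(evoked.max(), 2))
```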


Deep Learning
Dr. Terrence Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol-based version of AI. The new version of AI that Sejnowski and others developed, which became deep learning, is fueled instead by data. In his recently released book, The Deep Learning Revolution (The MIT Press), Dr. Sejnowski describes the way deep learning is changing our lives and transforming our economy. He explains the history and the people who led the deep learning revolution, how the field is evolving, and where it is heading. Dr. Sejnowski devotes one chapter to his research funded by the National Science Foundation through its Science of Learning Center, the Temporal Dynamics of Learning Center (TDLC). TDLC emphasizes machine learning and brain learning, two areas that are converging. Examples of TDLC research include the automatic recognition of facial expressions, social robots for classrooms, and learning how to learn. These advances are being supercharged with deep learning and could soon lead to personalized tutors.

Visual Perception and Cognition
Dr. Marlene Behrmann, Professor of Psychology at Carnegie Mellon University, specializes in the cognitive basis of visual perception, with a specific focus on object recognition. She is widely regarded as a world leader in the field of visual cognition. The Behrmann Lab's latest research was recently featured in an article in The Pittsburgh Post-Gazette.


Machine Perception and Learning
Dr. Garrison Cottrell investigates the mechanisms that underlie cognition in animals and people using computational modeling. Some projects have also advanced the state of the art in machine learning and computer vision. The work in the Cottrell lab is highly interdisciplinary and draws on findings in psychology, neuroscience, machine learning, computer vision, and behavioral economics. Research topics include perceptual expertise and face recognition, deep learning (trainable neural networks), and object saliency and eye movements.
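
As a toy intuition for the "object saliency" topic, the sketch below scores each image location by how much it differs from its surroundings (a crude center-surround contrast). It is purely didactic, with made-up parameters, and is not the Cottrell lab's saliency model.

```python
# Toy center-surround saliency: |fine blur - coarse blur| of image intensity.
# A didactic illustration only, not the Cottrell lab's model.
import numpy as np

def box_blur(img, k):
    """Mean over a (2k+1) x (2k+1) neighborhood (edges use whatever is available)."""
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1].mean()
    return out

def saliency(img, center=1, surround=8):
    """Center-surround contrast, scaled to [0, 1]."""
    s = np.abs(box_blur(img, center) - box_blur(img, surround))
    return s / (s.max() + 1e-12)

# Toy usage: a bright patch on a noisy background should dominate the map.
rng = np.random.default_rng(2)
img = 0.1 * rng.normal(size=(64, 64))
img[20:28, 40:48] += 1.0
peak_row, peak_col = np.unravel_index(saliency(img).argmax(), img.shape)
print("most salient location:", peak_row, peak_col)
```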

Real-World Neuroimaging
Dr. Tzyy-Ping Jung and his team develop and validate non-invasive, mobile, multi-modal brain/body imaging, along with advanced computational approaches, to perform real-world neuroimaging and brain-computer interfacing (BCI). Most neuroscience research has been performed in well-controlled laboratory settings, which may not translate to the highly dynamic real world, with its ever-changing physical and cognitive demands. Their goal, therefore, is to advance basic neuroscience research in real-world environments. Based on the neuroscientific knowledge gained from this research, they can then derive fundamental translational principles to guide neuroscience-based research and theory in complex operational settings. The hope is that this knowledge can ultimately be used to enhance human performance and resilience to physical and mental stress (e.g., cognitive monitoring, emotion detection, education/training), and to clarify neuropathogenic processes and create novel strategies to improve the prevention, diagnosis, and treatment of neurological and psychiatric diseases. Some of their projects include: nGoggle (combining virtual reality and BCI technologies to assess visual function and glaucoma); advancing mobile and wireless EEG/BCI technology to improve clinical research and practice in neurology, psychiatry, gerontology, and rehabilitation medicine; problem solving, decision-making, and visual search in a VR escape room; BCI for drowsiness/inattention detection; user interfaces driven by brainwaves; and high-speed spelling with a noninvasive BCI. Dr. Jung’s overarching vision is to create entirely new ways to enhance the lives of patients with neurological and psychiatric disorders and their families.
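
To give a flavor of one item on that project list, drowsiness/inattention detection, here is a minimal sketch built on a common heuristic: track the ratio of slow (theta + alpha) to fast (beta) EEG power in a sliding window and flag windows where it climbs above a threshold. The bands, threshold, and synthetic data are assumptions for illustration, not the Jung lab's detector.

```python
# Heuristic drowsiness index: (theta + alpha) / beta power in sliding windows.
# Bands, threshold, and data are illustrative assumptions.
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def band_power(x, fs, lo, hi):
    p = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return p[(f >= lo) & (f <= hi)].sum()

def drowsiness_index(window, fs=FS):
    slow = band_power(window, fs, 4, 8) + band_power(window, fs, 8, 12)  # theta + alpha
    fast = band_power(window, fs, 13, 30)                                 # beta
    return slow / (fast + 1e-12)

def flag_drowsy(eeg, fs=FS, win_s=2.0, threshold=3.0):
    """Return start times (s) of windows whose index exceeds the threshold."""
    step = int(win_s * fs)
    return [i / fs for i in range(0, len(eeg) - step, step)
            if drowsiness_index(eeg[i:i + step], fs) > threshold]

# Toy usage: alert-looking noise for 60 s, then strong 10 Hz (alpha) activity.
rng = np.random.default_rng(3)
t = np.arange(120 * FS) / FS
eeg = rng.normal(size=t.size)
eeg[60 * FS:] += 3.0 * np.sin(2 * np.pi * 10 * t[60 * FS:])
print(flag_drowsy(eeg)[:5])  # first few flagged window onsets, in seconds
```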

Spatiotemporal Patterns of Neuronal Activity in the Brain
Dr. Matthew Shtrahman and his lab at the Sanford Consortium for Regenerative Medicine study the spatiotemporal patterns of neuronal activity generated in the brain. The team develops advanced optical techniques for recording neuronal activity in a variety of experimental models to understand how network firing codes for information, is shaped through learning, and becomes altered in diseases of the nervous system.
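
As a rough illustration of the first analysis step such optical recordings typically go through, the sketch below converts raw fluorescence traces to dF/F relative to a slowly varying baseline, so activity-related transients stand out. The parameters and toy data are assumptions, not the Shtrahman lab's pipeline.

```python
# Generic dF/F computation for fluorescence traces (cells x frames).
# Parameters and toy data are illustrative assumptions.
import numpy as np

def dff(traces, fs, baseline_s=30.0, percentile=10):
    """Return dF/F; the baseline F0 per frame is a low percentile of the
    preceding `baseline_s` seconds, tracking slow drift but not transients."""
    n_cells, n_frames = traces.shape
    win = max(1, int(baseline_s * fs))
    out = np.empty_like(traces, dtype=float)
    for t in range(n_frames):
        start = max(0, t - win)
        f0 = np.percentile(traces[:, start:t + 1], percentile, axis=1)
        out[:, t] = (traces[:, t] - f0) / (f0 + 1e-12)
    return out

# Toy usage: 3 cells, 2 minutes at 10 Hz, slow drift plus one transient each.
fs, n_frames = 10, 1200
rng = np.random.default_rng(4)
traces = 100 + np.linspace(0, 5, n_frames) + rng.normal(scale=1.0, size=(3, n_frames))
for cell, onset in enumerate((200, 600, 1000)):
    traces[cell, onset:onset + 20] += 30 * np.exp(-np.arange(20) / 8.0)
print(dff(traces, fs)[:, 595:605].round(2))  # cell 1's transient stands out near frame 600
```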

Studies of the Hippocampus
Dr. Lara Rangel studies the hippocampus (specifically, cells in the dentate gyrus), working to provide new insight into the single-cell interactions that underlie the brain rhythms measured in rodents and humans. Her research is carried out in the collaborative Neural Crossroads Laboratory.
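
One standard way to relate single cells to a brain rhythm, sketched below under assumed parameters, is to band-pass the local field potential into the theta range and ask at which theta phase each cell fires. This is a generic illustration, not the Rangel lab's analysis code.

```python
# Theta phase of spikes: FFT band-pass + analytic signal.
# Frequencies, rates, and toy data are illustrative assumptions.
import numpy as np

def theta_phase(lfp, fs, lo=4.0, hi=12.0):
    """Instantaneous theta phase (radians) via an FFT band-pass + analytic signal."""
    spec = np.fft.fft(lfp)
    freqs = np.fft.fftfreq(len(lfp), 1.0 / fs)
    spec[(np.abs(freqs) < lo) | (np.abs(freqs) > hi)] = 0.0  # keep theta only
    spec[freqs < 0] = 0.0                                    # analytic signal: drop negative
    spec[freqs > 0] *= 2.0                                   # frequencies, double positive ones
    return np.angle(np.fft.ifft(spec))

# Toy usage: an 8 Hz LFP plus noise, with one spike planted at each theta trough.
fs, dur = 1000, 20
rng = np.random.default_rng(5)
t = np.arange(dur * fs) / fs
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.normal(size=t.size)
spike_times = np.arange(0.0, dur, 1.0 / 8) + 0.75 / 8  # troughs of the 8 Hz cycle
phases = theta_phase(lfp, fs)[(spike_times * fs).astype(int)]
print("mean spike phase (rad):", round(np.angle(np.mean(np.exp(1j * phases))), 2))
```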

Perceptual Expertise
Dr. Isabel Gauthier heads the Object Perception Lab in the Psychology Department at Vanderbilt University. She and her colleagues are interested in how we perceive, recognize, and categorize objects and shapes (such as faces, letters, cars, and novel objects like Greebles). Much of the work in the Object Perception Lab revolves around perceptual expertise, defined as becoming very good at perceptual judgments that were initially very difficult, and investigates the behavioral and neural changes that occur as this expertise is acquired.

Time and Timing in Sensory Processing and Learning
Dr. Dan Feldman and his team at UC Berkeley study the function and plasticity of the cerebral cortex at the synaptic, cellular, and neural-systems levels. Precise timing is critical for sensory processing, from speech production and recognition to the processing of visual motion, and disruption of rapid temporal processing may be the basic deficit in dyslexia and other language impairments. Recent work from Dr. Feldman's laboratory indicates that whisker inputs are also encoded with high temporal precision, and that this precision is critical for sensory representation and for plasticity in the cortex. He is currently testing how this temporal precision arises and how important it is perceptually, with the goal of understanding the neurobiological basis for temporal processing deficits and how they may be remedied.
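
As a concrete illustration of what "high temporal precision" can mean in such data, the sketch below measures the trial-to-trial jitter of the first spike after a simulated whisker deflection; the spike generator and all numbers are invented for illustration.

```python
# Trial-to-trial jitter of first-spike latency: a simple operational measure
# of temporal precision. Toy data only.
import numpy as np

def first_spike_jitter(trials):
    """Std. dev. (ms) of the first spike latency across trials.

    trials: list of 1-D arrays of spike times in ms, one array per trial."""
    latencies = [t.min() for t in trials if t.size > 0]
    return float(np.std(latencies))

# Toy usage: 50 trials, first spike ~8 ms after the deflection with 0.4 ms jitter,
# followed by less precisely timed later spikes.
rng = np.random.default_rng(6)
trials = [np.sort(np.concatenate([
              [8.0 + rng.normal(scale=0.4)],                 # precisely timed first spike
              rng.uniform(15, 100, size=rng.poisson(3))]))   # later, looser spikes
          for _ in range(50)]
print(f"first-spike jitter: {first_spike_jitter(trials):.2f} ms")
```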


Neural Processes That Underlie Active Spatial Exploration and Memory
Dr. Joe Snider performs research to better understand how the brain acts in the high-dimensional world, both in health and in disease. His goal is to identify the neural processes, reflected in EEG temporal dynamics, that underlie active spatial exploration and memory. To do so, he uses the Motion Capture Lab at UC San Diego to combine and synchronize immersive virtual reality, 3D motion capture, high-density electroencephalographic (EEG) recordings, and computational models to probe the underlying mechanisms. In one study, funded by a grant from the Office of Naval Research (ONR), subjects actively explored an environment on a virtual aircraft carrier deck. The team found that cortical rhythms recorded as subjects freely walked about this large-scale virtual environment predicted their future memory for the environment.
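
The logic of that kind of finding can be sketched as a simple across-subject analysis: summarize each subject's cortical rhythm during exploration as band power, then correlate it with their later memory score. Everything below (band choice, sampling rate, synthetic data) is an assumption for illustration, not the Snider lab's analysis.

```python
# Across-subject correlation between exploration-period band power and memory.
# Band, sampling rate, and data are illustrative assumptions.
import numpy as np

def band_power(eeg, fs, lo, hi):
    p = np.abs(np.fft.rfft(eeg)) ** 2
    f = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    return p[(f >= lo) & (f <= hi)].mean()

def rhythm_vs_memory(eeg_per_subject, memory_scores, fs, lo=4, hi=8):
    """Pearson correlation between log band power during exploration and memory."""
    power = [np.log(band_power(e, fs, lo, hi)) for e in eeg_per_subject]
    return float(np.corrcoef(power, memory_scores)[0, 1])

# Toy usage: 20 fake subjects whose theta amplitude partly determines their score.
fs, dur = 250, 60
rng = np.random.default_rng(7)
t = np.arange(dur * fs) / fs
amps = rng.uniform(0.5, 2.0, size=20)
eeg_per_subject = [a * np.sin(2 * np.pi * 6 * t) + rng.normal(size=t.size) for a in amps]
memory_scores = amps + rng.normal(scale=0.3, size=20)
print(f"r = {rhythm_vs_memory(eeg_per_subject, memory_scores, fs):.2f}")
```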


Personalized Review Improves Students’ Long-Term Knowledge Retention
Dr. Mike Mozer and Dr. Harold Pashler have developed a software tool that provides individualized review of course material to middle-school students. They found that using the software produces a 16.5% boost in retention of complete course content one month after the term’s end, relative to current educational practice. Individualized review also leads to a 10% improvement over a more generic one-size-fits-all review strategy. 
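
The gist of such a system can be sketched in a few lines: estimate, per student and per item, how likely the material is to be recalled, and put the shakiest items at the front of that student's review queue. The exponential forgetting model, parameters, and item names below are generic assumptions for illustration, not Mozer and Pashler's actual algorithm.

```python
# Simplified personalized-review scheduler: exponential forgetting + lowest
# predicted recall first. A generic caricature, not the authors' algorithm.
import numpy as np

def predicted_recall(days_since_study, n_correct_reviews, base_decay=0.3):
    """Exponential forgetting, slowed by each prior successful review."""
    decay = base_decay / (1.0 + n_correct_reviews)
    return np.exp(-decay * days_since_study)

def personalized_queue(items, k=3):
    """items: dict name -> (days since last study, # correct reviews).
    Returns the k items this student most needs to review."""
    scored = {name: predicted_recall(days, correct)
              for name, (days, correct) in items.items()}
    return sorted(scored, key=scored.get)[:k]

# Toy usage for one student: recently reviewed, well-learned items drop out of
# the queue; stale, shaky items rise to the top.
items = {
    "photosynthesis": (10, 0),
    "mitosis":        (2, 3),
    "osmosis":        (7, 1),
    "ecosystems":     (20, 4),
    "genetics":       (1, 0),
}
print(personalized_queue(items))
```

A real system would fit the decay parameters to each student's actual response history rather than using fixed constants.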


Neurophysiological Processes Underlying Prosocial Behaviors
Dr. Laleh Quinn and her colleagues are studying the neurophysiological processes underlying prosocial behaviors such as helping, cooperating, and reciprocating. The team is looking at social behaviors between pairs of rats and also between rats and robots. They are also examining the nature of shared physiological responses between rats during helping tasks, in an attempt to understand the extent to which rats can sense the affective state of a conspecific. Multiple state-of-the-art techniques are used to achieve these ends, including simultaneous wireless single-unit and field-potential recordings of brain signals, flexible-electrode recording of bodily responses, and 3D tracking of dyadic behaviors.
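
One simple way to ask whether two animals' physiological responses are "shared", sketched below with synthetic stand-in signals, is to cross-correlate the two recordings and look for a strong peak near zero lag. This is an illustrative analysis, not the lab's actual method.

```python
# Lagged cross-correlation between two physiological signals. Toy data only.
import numpy as np

def peak_cross_correlation(x, y, fs, max_lag_s=2.0):
    """Return (peak correlation, lag in seconds); lag > 0 means y follows x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    max_lag = int(max_lag_s * fs)
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.mean(x[max(0, -l):len(x) - max(0, l)] *
                     y[max(0, l):len(y) - max(0, -l)]) for l in lags]
    best = int(np.argmax(corrs))
    return corrs[best], lags[best] / fs

# Toy usage: animal B's signal is a delayed, noisy copy of animal A's.
fs = 50
rng = np.random.default_rng(8)
a = np.convolve(rng.normal(size=fs * 120), np.ones(fs) / fs, mode="same")  # slow signal
b = np.roll(a, int(0.5 * fs)) + 0.1 * rng.normal(size=a.size)
corr, lag = peak_cross_correlation(a, b, fs)
print(f"peak r = {corr:.2f} at lag {lag:.2f} s")
```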