
2012 TDLC Summer Fellows Institute (SFI) - Week-Long Project Descriptions


Motion Capture and EEG

EEG patterns during planning reaching movements to spatial targets (Supervisor: Dr. Markus Plank)
There have been few EEG studies examining the neural dynamics associated with naturalistic movement. In this project, you will conduct a pilot study on EEG correlates of planning targeted reaching movements. You will use the BioSemi 70-channel EEG system to measure macroscopic brain dynamics during target encoding, motor planning, and execution, and the PhaseSpace motion capture system to record the 3D kinematics of each reach. After removing muscle and other artifacts from the EEG using independent component analysis (ICA), you will perform event-related and time-frequency analyses of the EEG and determine which aspects of the upcoming movement (such as direction, speed, and accuracy) you can predict from the EEG recorded before the movement.
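
For concreteness, the artifact-removal and analysis stage might look like the following minimal sketch in MNE-Python, assuming a BioSemi .bdf recording; the file name, event code, filter settings, and excluded-component indices are placeholders rather than details of the actual study.

```python
# Minimal sketch of the EEG pipeline: ICA artifact removal, then
# event-related and time-frequency analyses. All specifics below
# (file name, event id, component indices) are illustrative.
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.time_frequency import tfr_morlet

# Load a hypothetical BioSemi recording (.bdf is BioSemi's native format)
raw = mne.io.read_raw_bdf("reach_pilot.bdf", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)  # band-pass before ICA

# Decompose into independent components and drop artifactual ones
ica = ICA(n_components=30, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]  # components flagged as muscle/eye artifacts on inspection
ica.apply(raw)

# Epoch around a hypothetical "target onset" trigger (event id 1)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"target": 1},
                    tmin=-0.5, tmax=1.5, baseline=(-0.5, 0.0), preload=True)

# Event-related potentials and a Morlet time-frequency decomposition
evoked = epochs.average()
freqs = np.arange(4, 31, 2)  # theta through beta
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0, return_itc=False)
```

Features like these, computed on pre-movement windows, would then serve as inputs to whatever classifier or regression is used to predict direction, speed, or accuracy.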

Interacting Memory Systems

iRats and Real Rats (Supervisor: Dr. Janet Wiles) (4-5 participants max.)
A study of robot/rodent interactions. The iRat (intelligent rat animat technology) is a robot designed as a tool for navigation, cognition, and neuroscience research. The iRat is approximately the size of an adult rat and has visual, proximity, and odometry sensors integrated with a differential drive and an onboard computer that allow the robot to navigate through a spatial environment. Real rats are also excellent at navigating through spatial environments and interacting with objects in them. This project is designed to evaluate how real, behaving rats interact with each other and with iRats under different conditions and across time. For example, how will real rats interact with an iRat that behaves in a mechanical, nonresponsive way, versus an iRat that moves more fluidly and reacts to the real rat's behavior? Finally, there will be an opportunity to analyze neural recordings from real rats engaged in these interactions.
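
As a rough illustration of the planned contrast between conditions, the two behavior policies might differ as in the sketch below; the sensor and wheel interfaces are hypothetical placeholders, not the iRat's actual API.

```python
# Illustrative contrast between the two iRat conditions described above.
# The dict of wheel speeds stands in for a real differential-drive command.

def mechanical_policy(t):
    """Nonresponsive condition: a fixed, scripted trajectory."""
    # Drive straight for three intervals, then pivot, regardless of the rat.
    phase = (t // 5) % 4
    if phase < 3:
        return {"left_wheel": 0.3, "right_wheel": 0.3}
    return {"left_wheel": 0.3, "right_wheel": -0.3}

def reactive_policy(proximity, rat_bearing):
    """Reactive condition: smooth motion that responds to the rat."""
    if proximity < 0.1:  # rat is very close: back off gently
        return {"left_wheel": -0.2, "right_wheel": -0.2}
    # Turn smoothly toward the rat's bearing (radians, positive = left)
    turn = max(-0.2, min(0.2, 0.5 * rat_bearing))
    return {"left_wheel": 0.3 - turn, "right_wheel": 0.3 + turn}
```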

Perceptual Expertise
Supervisor: Iris Gordon
The goal of this workshop is to introduce trainees to the methods and analyses used in behavioral studies. The theoretical focus of this workshop will be the hallmark concepts of perceptual expertise: atypicality, holistic perception (inversion), and object categorization (novel, expert, and common objects). Trainees will have the opportunity to design an experiment based on these concepts, and will learn to generate stimulus sets that test for the underlying cognitive mechanisms associated with each. Analysis will focus on learning d-prime (d′), and on a more in-depth understanding (plus concrete demonstration) of between/within-subjects factors and repeated-measures ANOVA. Trainees will also learn how to tailor behavioral experiments to fit the needs of other populations, such as children or individuals with special needs. Lastly, trainees will learn what it means to become an "expert" by exploring learning techniques in a short 5-day activity. No previous skills or knowledge are required.
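
As a preview of the d-prime analysis: d′ is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch, assuming SciPy and using a common correction for extreme rates:

```python
# d' = Z(hit rate) - Z(false-alarm rate), where Z is the inverse of the
# standard normal CDF. Rates of exactly 0 or 1 are nudged inward so the
# z-transform stays finite (one common correction; others exist).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    eps_h = 0.5 / (hits + misses)
    eps_f = 0.5 / (false_alarms + correct_rejections)
    hit_rate = min(max(hit_rate, eps_h), 1 - eps_h)
    fa_rate = min(max(fa_rate, eps_f), 1 - eps_f)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 45 hits / 5 misses and 10 false alarms / 40 correct rejections
print(d_prime(45, 5, 10, 40))  # ~2.12
```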

Sensorimotor

Social Interaction
Emotion Mirror for RUBI using CERT

In this project, we seek to develop a program for RUBI that can imitate facial expressions in real time. RUBI has a camera mounted on its forehead; using this camera and CERT, we can perform face detection and facial expression analysis. RUBI also has an animated face, which runs on an iPad mounted in RUBI's head. RUBI's face program has a set of parameters that we can change to produce different facial expressions. However, manually generating natural-looking expressions from this set of parameters is a difficult task.

One way to automate the generation of new expressions is to use the output of CERT to determine the location and properties of the components of RUBI's face. The input to the system is a human face producing a natural expression. Using CERT, we can extract the following components and reproduce a similar expression on RUBI's face:

  • The position of eyebrows relative to each other and to the eyes
  • The degree to which the eyelids are closed
  • The shape and position of the lips and mouth corners

In this project, we seek to use the output of CERT on human-generated expressions to produce a similar facial expression in RUBI. We assume that RUBI has no built-in expressions; that is, we use the exact positions of the human's facial components to determine the locations of RUBI's face parts. To do this, we must account for the relative size and position of facial components in both the human and RUBI. The system will run in real time, giving RUBI a rudimentary ability to interact with people.
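
A rough sketch of the proposed mapping is below. CERT's exact output format and RUBI's face-parameter names are not specified in this description, so the field names here are illustrative placeholders only.

```python
# Hypothetical mapping from CERT-derived facial-component positions to
# RUBI face parameters, with a relative-size correction between faces.

def map_human_to_rubi(landmarks, human_face_width, rubi_face_width):
    """landmarks: dict of (x, y) pixel positions extracted via CERT, e.g.
    {"left_brow": ..., "left_eye": ..., "upper_lid": ..., "lower_lid": ...,
     "mouth_left": ..., "mouth_right": ...} (placeholder names)."""
    s = rubi_face_width / human_face_width  # relative-size correction
    params = {}
    # Eyebrow height relative to the eye (image y grows downward)
    params["brow_raise"] = s * (landmarks["left_eye"][1]
                                - landmarks["left_brow"][1])
    # Eyelid openness: vertical gap between upper and lower lid
    params["eye_open"] = s * (landmarks["lower_lid"][1]
                              - landmarks["upper_lid"][1])
    # Mouth-corner positions, scaled into RUBI's coordinate frame
    params["mouth_left"] = tuple(s * v for v in landmarks["mouth_left"])
    params["mouth_right"] = tuple(s * v for v in landmarks["mouth_right"])
    return params
```

Run once per camera frame, a mapping like this is what would give RUBI its real-time, pseudo-interactive quality.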


MEG
Auditory MEG/MRI experiment training
We have room for ONE trainee (due to space and time constraints) in a MEG/MRI project. The lucky trainee will gain hands-on experience with: 1) safety issues pertaining to MRI scanning, 2) how to obtain informed consent from subjects for imaging studies, 3) the theoretical basis for the behavioral auditory attention, discrimination, and sequencing tasks we have developed for our TDLC MEG/MRI/fMRI study, 4) how to place electrodes and prep subjects for both MEG and MRI, 5) how to use the equipment to collect behavioral, MEG, and MRI data, and 6) a VERY BASIC understanding of the kinds of data derived from MEG and MRI. What we will not be able to do is teach a trainee how to analyze MEG and/or MRI data, given the complexity of those analyses and Tim's time constraints.


Initiative 3
Efficient training via attentional guidance (TA: Brett Roads)
Many human activities involve visual exploration of complex environments, e.g., baggage screening, reading mammograms, matching fingerprints, and military intelligence analysis of satellite images. In past work, we've shown that individuals can be trained more efficiently by cuing them where to look in a display. However, these earlier studies used simple psychological tasks, with displays consisting of isolated letters rather than naturalistic images. The goals of this project are to: (1) identify a domain of analysis (possibly fingerprints, possibly street scenes, possibly human faces), (2) determine image manipulations that are likely to guide attention to a location (contrast enhancement/suppression, saturation or color manipulation, subtle motion cues, etc.), (3) conduct an eye-tracking study to show that the image manipulations succeed in redirecting attention, and, time permitting, (4) show effects on learning (when tested on novel images) from guided training.
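
As one concrete illustration of goal (2), a local contrast-and-saturation boost with a soft edge might look like the sketch below (using Pillow and NumPy); all parameter values are arbitrary starting points, not validated cue strengths.

```python
# Boost contrast and saturation inside a circular region to draw the eye
# to a target location; a soft edge keeps the cue from becoming a
# hard-edged object in its own right.
import numpy as np
from PIL import Image, ImageEnhance

def cue_region(image, center, radius, contrast=1.5, saturation=1.5):
    enhanced = ImageEnhance.Contrast(image).enhance(contrast)
    enhanced = ImageEnhance.Color(enhanced).enhance(saturation)

    # Blend: enhanced pixels inside the cue disk, original elsewhere
    w, h = image.size
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - center[0]) ** 2 + (yy - center[1]) ** 2)
    mask = np.clip((radius - dist) / (0.2 * radius), 0.0, 1.0)[..., None]

    out = (np.asarray(image, float) * (1 - mask)
           + np.asarray(enhanced, float) * mask)
    return Image.fromarray(out.astype(np.uint8))

# img = Image.open("street_scene.jpg").convert("RGB")  # hypothetical image
# cue_region(img, center=(320, 240), radius=60).save("cued.jpg")
```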

Modeling and Analysis

"Ziggerins" are an artificial class of objects that have within-class similarity as well as between class "styles." Experiments have been performed on people learning either the class or the style. This project is to apply Cottrell's neural network expertise model to this class of stimuli and see if the model matches the data. We have already programmed versions of the model available, but variations of this project could involve using different features. The "standard" model uses gabor filters, but better versions use ICA features, and there is some reason to believe that the latter may be better at the ziggerin task. We will provide you with the model code, ziggerin stimuli as well as the original paper (linked below) with the behavioral data. 
http://tinyurl.com/8qcbue4
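
For orientation, a Gabor front end of the kind the "standard" model uses typically looks something like the sketch below (here via scikit-image and SciPy); the frequencies and orientations are illustrative, not the model's published parameters.

```python
# Multi-scale, multi-orientation Gabor magnitude features for a
# grayscale image (2D float array), in the spirit of the standard
# expertise model's front end.
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=8):
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = gabor_kernel(f, theta=theta)
            real = convolve(image, np.real(kernel), mode="wrap")
            imag = convolve(image, np.imag(kernel), mode="wrap")
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(feats)
```

A variation with ICA features would simply swap this function for responses of ICA basis images (e.g., learned with sklearn.decomposition.FastICA on image patches).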