
2009 Trainee Boot Camp - Week-Long Project Descriptions

Perceptual Expertise (TA: Mayu Nishimura)

How is it that a car expert can tell the difference between a 1983 and a 1984 Porsche 911, but the rest of us cannot? Get a glimpse into the world of perceptual expertise by training yourself to become a Greeble expert! Learn how training changes our perceptual abilities, and whether such training would be useful for individuals with perceptual deficits.

Sensory Motor (TA: Alan Robinson)


In short: do the psychophysics lab properly, with a more theoretically motivated set of conditions. That is to say, we will try to determine why the Anti-McCollough effect induces the opposite color to what would be expected from the McCollough effect. This is theoretically interesting because the Anti-McCollough effect casts serious doubt on previous theories attempting to explain the McCollough effect and other forms of contingent adaptation.

The actual experiment will be quite a bit more involved than what we did in lab, but of the same flavor.

You can read about the Anti-McCollough effect, and why it's so super awesome, at the following link.



Motion Capture (David Peterson et al.)

Rewarded learning:  behavior, computational models, and EEG

In this project, you will conduct a pilot experiment on brain correlates of rewarded learning. In the experiment, subjects use feedback to learn the relative probabilistic reward contingencies of abstract visual stimuli. After working closely with the project TA to program the experiment in Presentation, you will use the Biosemi 64-channel EEG to measure macroscopic brain dynamics during the learning process. You will also have the opportunity to use a computational reinforcement learning model to infer trial-by-trial internal variables driving the dynamics of the learning process. Necessary skills: comfortable with script-based programming and meticulous attention to details of experimental procedure. Preferred skills: interest in neural correlates of learning and Matlab programming. Ideal number of team members: 3.
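To give a flavor of the kind of model involved, here is a minimal sketch (in Python rather than Matlab, and with made-up parameter values) of a delta-rule learner choosing between two stimuli with different reward probabilities. The trial-by-trial value estimates it produces are the sort of internal variables one might correlate with EEG dynamics:

```python
import math
import random

def simulate_rl_learner(reward_probs, n_trials=200, alpha=0.1, beta=3.0, seed=0):
    """Simulate a simple reinforcement learner (delta-rule / Q-learning)
    choosing between two stimuli via a softmax decision rule.

    Returns a list of (choice, reward, q0, q1) tuples, one per trial."""
    rng = random.Random(seed)
    q = [0.5, 0.5]                      # initial value estimates
    history = []
    for _ in range(n_trials):
        # softmax choice between the two stimuli
        e0, e1 = math.exp(beta * q[0]), math.exp(beta * q[1])
        choice = 0 if rng.random() < e0 / (e0 + e1) else 1
        # probabilistic reward for the chosen stimulus
        reward = 1.0 if rng.random() < reward_probs[choice] else 0.0
        # delta-rule update: the prediction error drives learning
        q[choice] += alpha * (reward - q[choice])
        history.append((choice, reward, q[0], q[1]))
    return history

# stimulus 0 pays off 80% of the time, stimulus 1 only 20%
history = simulate_rl_learner([0.8, 0.2])
```

After a couple hundred trials the value estimate for the richer stimulus should sit well above that of the poorer one; the sequence of prediction errors is what would be regressed against the EEG signal.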


Social Interaction (TA: Jake Whitehill)

Automated Coach of Motor-Control Skill

The goal is to train a fully autonomous "coach" for the Inverted Pendulum Game that was demonstrated during the SIN workshop. We will start from the data we recorded during the workshop, and apply machine learning techniques to develop a classifier that decides when to perform various actions (e.g., say "Yes", make game level harder, make game level easier). We will then deploy the classifiers we develop into an automated coach system. The project is quite open-ended and the precise direction it will take will depend on the imaginations of the participants. 
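As one concrete (and deliberately simple) way a classifier like this could work, here is a sketch of a nearest-neighbor action selector. The features and training examples are entirely hypothetical; the real system would use features recorded during the workshop sessions:

```python
def nearest_neighbor_coach(train_examples, features):
    """Pick a coaching action by 1-nearest-neighbor lookup over
    recorded (feature_vector, action) pairs -- one simple baseline
    for the classifier described above."""
    best_action, best_dist = None, float("inf")
    for vec, action in train_examples:
        d = sum((a - b) ** 2 for a, b in zip(vec, features))
        if d < best_dist:
            best_dist, best_action = d, action
    return best_action

# hypothetical training data: (mean balance error, seconds since last
# coach action) -> action the human coach took in that situation
train = [((0.05, 10.0), "harder"),    # player doing well -> raise difficulty
         ((0.40, 10.0), "easier"),    # player struggling -> lower difficulty
         ((0.15, 10.0), "say_yes")]   # moderate performance -> encourage

action = nearest_neighbor_coach(train, (0.07, 10.0))
```

In practice one would likely prefer a classifier with calibrated confidence (so the coach can also decide to do nothing), but the nearest-neighbor baseline is a reasonable first pass at the workshop data.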


Computational Modeling (Gary Cottrell and Honghao Shan)

Project 1:

The task is to use ICA to investigate whether faces can be caricatured by amplifying their representation in ICA space. First, extract facial features using ICA; then if a face image has big values on several features, amplify those values to exaggerate the facial features.

Some interesting variations: using sparse coding instead of complete ICA to get overcomplete representations, or applying ICA on top of Gabor magnitudes to get nonlinear features, etc.
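The core amplification step is simple once a face has a coefficient vector in ICA space. Here is a toy sketch of the encode-amplify-reconstruct pipeline using a hand-made 4-pixel "face" and two hand-made components (in the real project, the basis and coefficients would come from running ICA on face images):

```python
def caricature(coeffs, mean_coeffs, gain=1.5):
    """Exaggerate a face's representation by scaling its deviation
    from the average coefficient vector -- the caricaturing idea
    described above, applied in (here, toy) feature space."""
    return [m + gain * (c - m) for c, m in zip(coeffs, mean_coeffs)]

def reconstruct(coeffs, basis, mean_face):
    """Linear reconstruction: mean face plus weighted sum of basis vectors."""
    face = list(mean_face)
    for c, vec in zip(coeffs, basis):
        face = [f + c * v for f, v in zip(face, vec)]
    return face

# toy 4-pixel example with 2 hand-made "components"
mean_face = [0.5, 0.5, 0.5, 0.5]
basis = [[1.0, 0.0, -1.0, 0.0],   # e.g., something like nose width
         [0.0, 1.0, 0.0, -1.0]]   # e.g., something like eye spacing
coeffs = [0.2, -0.1]              # this face's representation
mean_coeffs = [0.0, 0.0]          # the average face sits at the origin

# gain=2.0 doubles every deviation from the average face
exaggerated = caricature(coeffs, mean_coeffs, gain=2.0)
face = reconstruct(exaggerated, basis, mean_face)
```

The amplification-only-of-large-values variant mentioned above would simply apply the gain selectively, e.g. only to coefficients whose magnitude exceeds some threshold.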

There are ads on Facebook that charge a fee to cartoonize your face, so this could be of commercial value!

One check of whether this works, aside from visual inspection, is that people are generally faster at recognizing a caricature than the original face, so a short behavioral experiment could be part of this project.

Project 2:

"Ziggerins" are an artificial class of objects that have within-class similarity as well as between-class "styles." Experiments have been performed on people learning either the class or the style. This project is to apply Cottrell's neural network expertise model to this class of stimuli and see if the model matches the data. We already have programmed versions of the model available, but variations of this project would involve using different features. The "standard" model uses Gabor filters, but better versions use ICA features, and there is some reason to believe that the latter may be better at the ziggerin task. We will provide you with the ziggerin stimuli as well as the original paper (in press at the moment) with the behavioral data.

Paper abstract: http://www.journalofvision.org/8/6/883/


Interacting Memory Systems (TA: Robert Lindsey)
Project 1: Predicting the Difficulty of Learning 
Suppose that an individual has studied a set of foreign language vocabulary words, and time remains to review half of these words.  Which half should be chosen for further study?  Given a model of the individual’s memory, we could select the items that will most benefit from additional study.  

A key challenge to determining which items to select is that we do not know the individual's internal memory state. If each study trial begins with a test, then one piece of information we have to work with is whether an individual successfully recalls a particular item during study. Obviously, if an individual cannot recall an item in the study session, they are unlikely to recall the item at test. Ideally, we would like much finer-grained information about the individual's memory state. The goal of this project is to discover and exploit other indicators of an individual's memory state, in order to better predict the difficulty of learning and to build more accurate models of an individual's memory state.

During the lab on the first day of the bootcamp, we conducted a paired-associate learning experiment in which we found a systematic relationship between _response latencies_ during study and recall probabilities at test.  If this relationship is found for group data, then almost certainly there's an even stronger relationship in the individual data.  To get this project underway, we will reanalyze the individual data we collected from the two experiments we conducted.  

We have an even richer source of data to examine. Pashler and Mozer ran an experiment in which participants learned paired associates during a study session and were then tested a week later. During the study session, participants saw each item to be learned once in a study-only trial, and then each item was repeated five more times during the course of the session, in a test-study trial. (A 'test-study' trial is one in which the participant is first tested on an item, and then is given the correct answer and has additional time to study.) Thus, in this experiment, we have 5 responses and 5 response latencies for each item for each participant in the study session, which can be used to predict recall probability in the test session. The bulk of the project will involve analyzing these data and determining a model that predicts an individual's performance on a specific item in the test session from their performance during the study session. We will consider models that aggregate data across individuals for a specific item, and models that aggregate across items for a specific individual. It's unclear at present what type of model will yield the highest predictive accuracy.
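A minimal version of such a predictor might combine the number of correct study-session responses with mean response latency in a logistic model. The weights below are invented purely for illustration; fitting them (or a richer model) to the actual data is the point of the project:

```python
import math

def predict_recall(responses, latencies, w_correct=0.8, w_latency=-0.5, bias=-1.0):
    """Toy predictor of test-session recall probability from the five
    study-session responses (1 = correct, 0 = error) and their response
    latencies in seconds. All weights here are hypothetical."""
    n_correct = sum(responses)
    mean_latency = sum(latencies) / len(latencies)
    score = bias + w_correct * n_correct + w_latency * mean_latency
    return 1.0 / (1.0 + math.exp(-score))   # logistic squashing to [0, 1]

# a participant who recovers quickly vs. one who stays slow and error-prone
fast_learner = predict_recall([0, 1, 1, 1, 1], [2.0, 1.5, 1.2, 1.0, 0.9])
slow_learner = predict_recall([0, 0, 0, 1, 1], [4.0, 3.5, 3.0, 2.8, 2.5])
```

Extending the feature set, e.g. with the omission-versus-commission error distinction discussed below, is exactly the kind of refinement the project would explore.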

Pashler and Mozer ran another interesting experiment that offers insight into data that can be diagnostic of test recall performance. They found the following. When individuals are tested during the study session, they sometimes recall the correct answer. But when they fail to recall, their errors can be classified into two sorts: failure to provide any response (errors of omission), and providing the wrong response (errors of commission). Surprisingly, errors of commission are predictive of easier learning. That is, an individual is more likely to learn an item that they guessed wrong on than an item for which they were unwilling to venture any guess at all.

The goal of this project is to use the data sets we have available to  determine relationships between predictive variables (e.g., response  latencies and error type) and eventual learning (i.e., recall accuracy at  test).  Once we can exploit the relationships by using quantitative models, we can predict which items most require additional study.

If there's time, we will conduct pilot experiments to test predictions of this modeling approach, and to determine whether the 'difficult' items (the ones for which we predict recall failure at test) benefit from study as much as the 'easy' items.

Project 2: Using Attentional Cueing to Guide Human Learning

Many cognitive tasks involve interacting with a complex visual environment, e.g., driving a car, piloting a plane, controlling
air traffic, screening baggage, and even walking down a crowded street. Expertise in such environments comes from experience over a time period of years (Rosenbloom & Newell, 1987). Becoming an expert involves at least two distinct abilities: identifying features of the environment that are task relevant in a given context, and determining the appropriate response to these features. These two abilities pose a chicken-and-egg problem. The appropriate response cannot be determined until one knows which features are relevant, but the relevance of a feature depends on its being a reliable determinant of the task-appropriate response.

A long-term goal of this project is to understand the temporal dynamics of learning to attend to relevant features in complex visual environments, and to design tutoring systems that leverage expert knowledge to train novices more efficiently. One could ask an expert to stand over the shoulder of a novice as they, say, tried to control a flight simulator, and the expert could provide guidance such as, "Check your altimeter now." However, with rapid fire perceptuomotor decision making, such interruptions are unlikely to be helpful. Furthermore, this type of guidance requires the constant presence of a vigilant expert, and assumes experts can verbalize their attentional strategies. Alternatively, we propose a novel perceptual learning paradigm:

1. We will record the eye movement behavior of an expert, and train a machine learning model to predict the expert's eye movements given the current visual context.

2. We will then place the novice in an environment, and in parallel with the novice performing the task, use the expert eye-movement model to predict where the expert will fixate at each instant.

3. We will cue the novice to the location of the expert's focus of attention using some type of salient but subtle visual cue (e.g., onset lag, motion, or brightness modulation).
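The prediction step (1-2 above) could be realized in many ways; as one simple sketch, here is a k-nearest-neighbor regressor that maps the current visual-context features to a predicted expert fixation location. The feature vectors and fixation coordinates below are invented for illustration:

```python
def predict_fixation(expert_data, context, k=3):
    """Predict where the expert would fixate given the current visual
    context, by averaging the fixation locations of the k most similar
    recorded contexts -- one simple realization of steps 1-2 above."""
    dists = []
    for features, (fx, fy) in expert_data:
        d = sum((a - b) ** 2 for a, b in zip(features, context))
        dists.append((d, fx, fy))
    dists.sort()                      # closest contexts first
    nearest = dists[:k]
    x = sum(fx for _, fx, _ in nearest) / len(nearest)
    y = sum(fy for _, _, fy in nearest) / len(nearest)
    return x, y

# hypothetical recorded (context_features, fixation_xy) pairs
expert_data = [((0.0, 0.0), (10.0, 10.0)),
               ((0.0, 1.0), (10.0, 20.0)),
               ((1.0, 0.0), (20.0, 10.0)),
               ((5.0, 5.0), (90.0, 90.0))]
x, y = predict_fixation(expert_data, (0.0, 0.0))
```

In the real system the context features would be extracted from the display (and possibly the task state), and the predicted location would drive the saliency cue in step 3.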

From experimental work and models of visual saliency (Zhang et al., 2008), we have a basic idea what kinds of visual cues will attract attention. What we don't know is whether guiding a novice's attention to the appropriate location will facilitate learning. However, we hypothesize an affirmative answer to this question under the Guthrian view that associations are strengthened by performing them (Guthrie, 1959).

For bootcamp, we will begin an investigation with a behavioral study that will test the conjecture that saliency cueing facilitates learning in an attentional task. Here's a sample experiment that we might conduct. Participants are seated in front of a screen that tracks eye position. They begin a trial by fixating at a central point against a black background.  At the start of each trial, four color patches will appear against a textured background. Participants are told that one of the color patches is the target and they should fixate the target to complete the trial. The target is defined by color, not location,
and the target identity is contingent on the texture of the background (e.g., with a background of vertical bars, the target will be red; with a background of horizontal bars, the target will be green). We'll record number of eye movements to fixate the target, and time to fixate the target. Presumably both of these measures will decrease with practice, based on the contextual cueing phenomenon (Chun & Jiang, 1998). We'll have n blocks of trials, and in each block all of the m textures will be presented. Half of the textures will be assigned to a saliency enhancement condition, in which the target will be made more salient
via some attentional manipulation; the other half of the textures will be assigned to a control condition, in which no saliency enhancement occurs. The experimental question is this: If we test participants every few blocks via trials in which no saliency enhancement occurs, will we observe better performance on items in the saliency-enhancement condition than in the control condition?
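The trial logic of this design can be sketched directly. The texture names, colors, and condition assignment below are placeholders for whatever stimuli we end up using:

```python
import random

# hypothetical contingency map: background texture -> target color
CONTINGENCY = {"vertical": "red", "horizontal": "green",
               "diagonal": "blue", "checker": "yellow"}
# half the textures get saliency enhancement, half are controls
SALIENT = {"vertical", "diagonal"}

def make_trial(texture, test_block=False, rng=None):
    """Build one trial spec: which of the four patches is the target,
    their (shuffled) arrangement, and whether the target gets a
    saliency boost. Test blocks never apply the enhancement."""
    rng = rng or random.Random()
    target = CONTINGENCY[texture]
    patches = [c for c in CONTINGENCY.values() if c != target] + [target]
    rng.shuffle(patches)              # target location varies trial to trial
    return {"texture": texture,
            "patches": patches,
            "target": target,
            "enhance": (texture in SALIENT) and not test_block}

trial = make_trial("vertical", rng=random.Random(1))
```

A block would then be a shuffled sequence of such trials, one per texture, with periodic test blocks (enhancement off for everyone) supplying the comparison between conditions.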

For bootcamp, we will design and pilot this experiment.  We will explore the effect of various sorts of saliency manipulation (e.g., brightness, onset flicker, relative onset time). We will also explore the relative timing of the saliency cues to determine whether timing influences effectiveness. We conjecture that a subliminal cue such as presenting the target 20 msec before any of the other patches will be more effective in training participants than a supraliminal cue such as presenting the target 100 msec before any other patch. With the subliminal cue, participants cannot "explain away" (in the Bayesian network sense) their eye movements as being caused by the cue; we therefore suspect that the shorter latency cue will have a greater influence on learning.

Recent work convinces us that saliency cueing will achieve the desired goal. Notably, Grant and Spivey (in press) have shown that a subtle saliency cue is useful for helping participants to solve an insight problem (the tumor-and-lasers problem).