
2015 TDLC Boot Camp - Potential Week-Long Project Descriptions

 

SIN:

  1. RUBI-PAL (Perception, Action, Learning) (Deborah Forster & Mohsen Malmir)
    Recognizing Objects - Interacting for Learning: With the addition of Active Object Recognition (AOR) to RUBI-6, the interaction space has changed. Now RUBI can ‘discuss’ object naming both through the flat touch-screen ‘belly’ (2D virtual objects) and in Give-N-Take exchange (3D physical objects). The team will explore the P.A.L. space and work towards designing and scheduling a ‘game’ that takes advantage of these changes. The project can be biased towards the interests, or the (non-)comfort zones, of team members. It can involve ONE or a combination of the following GOALS:
    1. Perception: modifying/improving the current AOR through machine learning, and/or
    2. Action: choreographing the integration through HRI design and programming of a new “game”, and/or
    3. Learning: designing a study (through to a working lab prototype) that addresses object-naming skill development
  2. Recognizing Movement - Taking Action: The GOAL is for RUBI to recognize a few types of movement and act accordingly. RUBI should be responsive to movements around her. For example, a blob moving towards RUBI can mean that someone is trying to hand her an object or to reach out and touch RUBI's face, arms, etc. If there is a large movement around RUBI, it could mean that people are standing in front of her in a group. To accommodate the one-week project, we will restrict the patterns to one or two. If RUBI can recognize one type with high certainty, e.g. that an object is moving towards her, she can extend her arms to try to grab it. The METHOD could involve deep nets or a classical vision pipeline, depending on people's interests (a minimal vision-pipeline sketch follows this list).
    [Questions? email Deborah Forster - forstermobu@gmail.ucsd.edu ]

  3. EEG (Gedeon/Alvin Li)
    Students can have access to the Dyadic EEG/MoCap facility in the Cognitive Development Lab, and all materials/documentation for running pilot participants on a modified version of the ‘bubble-popping’ turn-taking task. Alvin Li and possibly 1-2 experienced undergraduates can provide limited assistance; Gedeon will be available some days for consultation and planning. A more limited project - processing existing EEG data sets - is also possible.

  4. Using ChronoSense and ChronoVis (Nadir Weibel)
    This is not actually a “project”, as Nadir will be away next week and cannot oversee one. However, he will make the various instruments (not the lab-in-a-box per se) and the software available for people to use if they would aid someone’s project. Deborah Forster will have access to them.
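
  A minimal sketch of the "classical vision pipeline" option for the movement-recognition project above (SIN, project 2). It assumes a webcam and the OpenCV Python bindings; the background-subtraction approach, the growing-blob "approaching" heuristic, and all thresholds are illustrative placeholders rather than anything currently running on RUBI.

    # Classical-vision sketch for "Recognizing Movement - Taking Action".
    # Assumes a webcam and OpenCV; the thresholds are placeholders to tune.
    import cv2

    APPROACH_RATIO = 1.3   # foreground blob must grow ~30% between frames
    MIN_AREA = 2000        # ignore tiny blobs (noise)

    cap = cv2.VideoCapture(0)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    prev_area = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Foreground mask: pixels that differ from the learned background model.
        mask = cv2.medianBlur(subtractor.apply(frame), 5)
        # findContours returns (contours, hierarchy) or (image, contours, hierarchy)
        # depending on the OpenCV version; [-2] picks the contour list either way.
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        areas = [cv2.contourArea(c) for c in contours]
        area = max(areas) if areas else 0

        if area > MIN_AREA and prev_area > MIN_AREA and area > APPROACH_RATIO * prev_area:
            # A blob that keeps growing is (crudely) "moving towards" the camera;
            # this is where RUBI would be told to extend her arms.
            print("Object approaching - trigger reach behavior")
        prev_area = area

        cv2.imshow("foreground", mask)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

  The deep-net alternative would replace the hand-tuned growth heuristic with a small classifier trained on labeled clips of whichever one or two movement patterns the team picks.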

SMN:

  1. Eye-tracking (Leanne)
    Students will collect and analyze data on a variant of the joint-attention eye-tracking task from Siller and colleagues (using the EyeLink 1000). They will learn to collect eye-tracking data and observe the effects of tweaking parameters on participants’ data.
  2. EEG and vision (Joe Snider)
    Students will learn to collect and analyze simultaneous high-density EEG and eye-tracking data, and how to present accurately calibrated visual stimuli. On the hardware side, we will use the BrainVision ActiveII to record 64-channel EEG and the EyeLink 1000 to record eye movements. For software, we will use the Python-based Vizard 5.0 VR system to coordinate the hardware, time-lock events, and present calibrated visual stimuli. The collected EEG data will be analyzed to remove movement artifacts (through ICA) and to identify the prominent visual evoked potential (ERP) that occurs after eye movements. The project will take place in the Motion Capture/Brain Dynamics lab, and students will have the opportunity to familiarize themselves with the equipment and techniques we use to study movement and EEG (http://tdlc.ucsd.edu/research/research-facilities-mocap.html).
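
  A rough sketch of the offline analysis side of this project: ICA-based artifact removal followed by an eye-movement-locked evoked potential. MNE-Python is used here purely as an illustration; the BrainVision-format file name, the “saccade” marker, and the excluded component indices are placeholders for whatever the Vizard script and the lab pipeline actually produce.

    # Offline analysis sketch: ICA artifact removal and a saccade-locked ERP.
    # File name, marker name, and excluded components are placeholders.
    import mne
    from mne.preprocessing import ICA

    raw = mne.io.read_raw_brainvision("pilot01.vhdr", preload=True)
    raw.filter(1.0, 40.0)

    # ICA to isolate ocular/movement components; which ones to drop is decided
    # by visual inspection, so the [0, 1] below is only an example.
    ica = ICA(n_components=20, random_state=0)
    ica.fit(raw)
    ica.exclude = [0, 1]
    clean = ica.apply(raw.copy())

    # Epoch around eye-movement onsets and average to get the post-saccadic
    # visual evoked potential.
    events, event_id = mne.events_from_annotations(clean)
    epochs = mne.Epochs(clean, events, event_id=event_id, tmin=-0.2, tmax=0.5,
                        baseline=(-0.2, 0.0), preload=True)
    evoked = epochs["saccade"].average()   # "saccade" is a placeholder marker name
    evoked.plot()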


PEN:

  1. MTURK (Vicente)
    Students will replicate the Vanderbilt Expertise Task (VET) using PsiTurk and jsPsych. You should be able to get it running on Amazon Mechanical Turk and analyze some preliminary data. The stimuli are available here (http://gauthier.psy.vanderbilt.edu/resources).
    Note that you will need to make an AWS account, code the experiment up, run it, analyze the data, and replicate the original analysis. I have some notes and other resources for getting started and hosting a server, and we can run the study under my IRB.

    Time permitting, I’m also interested in multidimensional scaling, and it would be awesome if we could correlate individual differences with performance on individual VET categories, but we’re not there yet.
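
  A short sketch of what the preliminary analysis could look like once trial data have been pulled out of the PsiTurk database. It assumes an export to a CSV with one row per trial and the column names shown (worker_id, category, correct), which are placeholders rather than anything PsiTurk produces by default.

    # First-pass analysis sketch for the VET replication, assuming trial data
    # have been exported to CSV; file and column names are placeholders.
    import pandas as pd

    trials = pd.read_csv("vet_trials.csv")

    # Accuracy per participant and per VET category.
    acc = (trials.groupby(["worker_id", "category"])["correct"]
                 .mean()
                 .unstack("category"))
    print(acc.describe())   # group-level summary, one column per category

    # Correlations between per-category accuracies across participants:
    # a first look at the individual-differences question.
    print(acc.corr())

  The category-by-category correlation matrix is also a natural starting point for the multidimensional-scaling idea mentioned above.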

Modeling / Imaging:

  1. Ben: Making public data available via the nidata Python package, then implementing a machine-learning analysis on neuroimaging data using the nilearn and scikit-learn packages.
    1. Suggested datasets: NeuroVault (unthresholded fMRI group data from many different studies); Human Connectome Project (unprocessed individual-subject MRI, fMRI, diffusion MRI, and resting-state MRI); ABIDE (individual-subject pre-processed MRI, fMRI, and resting-state data from autism and control populations). EEG, single-unit recordings, and other data may also be available.
      1. See nidata spreadsheet for some possible data sources.
    2. First, implement a “fetcher” in the nidata Python package (learn git, GitHub, Python) that downloads the data locally on an as-needed basis.
    3. Next, use the nilearn and/or scikit-learn Python packages to do some interesting analyses on the data (a minimal decoding sketch follows at the end of this section).
      1. Could use resting state data to investigate functional networks as a function of age
    4. Finally, use nilearn and other Python visualization libraries to create static (Python) and/or dynamic (JavaScript/D3.js) visualizations. You may contribute to these packages as well (e.g. nbpapaya).
  2. Tomoki: Train your own multi-layer network (3 to 5 layers) to solve a classification problem.

    Two ideas (choose either one):
    1. Participate in the Kaggle competition: https://www.kaggle.com/c/liberty-mutual-group-property-inspection-prediction and make predictions using the network. How well does it do compared to other top-performers?

    2. Use the standard MNIST dataset to see the effect of regularization techniques. Compare the classification performance of a network trained with unsupervised pre-training plus supervised fine-tuning against one trained with supervised fine-tuning alone. The goal is to see how performance changes with different amounts of unsupervised vs. supervised training (a small sketch of this comparison follows at the end of this section).

       We could think of it as an "expertise" experiment: what is the difference between X years of task-specific training and X/2 years of general education plus X/2 years of task-specific training, for instance? This experiment would be a very simple model of such differences.
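
  For Modeling/Imaging project 1, a minimal decoding sketch of the nilearn/scikit-learn step referenced above. nilearn's built-in Haxby fetcher stands in for a nidata fetcher, and the face-vs-house classification is just an example analysis; the nidata interface itself is not assumed here.

    # Minimal nilearn/scikit-learn decoding sketch (Modeling/Imaging, project 1).
    # nilearn's Haxby fetcher stands in for a nidata fetcher; the data are
    # downloaded on an as-needed basis and cached locally after the first call.
    import pandas as pd
    from nilearn import datasets
    from nilearn.input_data import NiftiMasker
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    haxby = datasets.fetch_haxby()
    labels = pd.read_csv(haxby.session_target[0], sep=" ")
    keep = labels["labels"].isin(["face", "house"]).values

    # Mask the 4D fMRI data down to a (samples x voxels) matrix.
    masker = NiftiMasker(mask_img=haxby.mask_vt[0], standardize=True)
    X = masker.fit_transform(haxby.func[0])[keep]
    y = labels["labels"].values[keep]

    # Cross-validated face-vs-house classification.
    scores = cross_val_score(LinearSVC(), X, y, cv=5)
    print("mean accuracy: %.2f" % scores.mean())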
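
  For Modeling/Imaging project 2, a small sketch of the MNIST comparison referenced above: the same three-hidden-layer network trained with supervised labels only versus with simple unsupervised (autoencoder) pre-training of its first layer. Keras is just one possible framework, and the layer sizes and epoch counts are placeholders.

    # MNIST sketch (Modeling/Imaging, project 2): supervised-only training vs.
    # unsupervised autoencoder pre-training of the first layer + fine-tuning.
    # Keras is an example framework; sizes and epochs are placeholders.
    from tensorflow import keras
    from tensorflow.keras import layers

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

    def build_classifier():
        return keras.Sequential([
            layers.Dense(256, activation="relu", input_shape=(784,)),
            layers.Dense(128, activation="relu"),
            layers.Dense(64, activation="relu"),
            layers.Dense(10, activation="softmax"),
        ])

    # (a) Supervised fine-tuning only, from random weights.
    sup = build_classifier()
    sup.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    sup.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)

    # (b) Unsupervised pre-training: an autoencoder learns the first layer's
    # weights from unlabeled images, then the classifier is fine-tuned.
    encoder = layers.Dense(256, activation="relu", input_shape=(784,))
    autoencoder = keras.Sequential([encoder, layers.Dense(784, activation="sigmoid")])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(x_train, x_train, epochs=5, batch_size=128, verbose=0)

    pre = build_classifier()
    pre.layers[0].set_weights(encoder.get_weights())
    pre.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    pre.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)

    print("supervised only  :", sup.evaluate(x_test, y_test, verbose=0)[1])
    print("with pre-training:", pre.evaluate(x_test, y_test, verbose=0)[1])

  Varying the number of pre-training vs. fine-tuning epochs then gives the "X years vs. X/2 + X/2 years" comparison described above.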