TDLC researchers Drs. Michael Mozer and Hal Pashler are involved in a project to develop "smart" annotated online textbooks. These digital textbooks will be used "to gain a better understanding of a particular learner's state of mind and grasp of subject matter." The project, funded by a four-year, $1 million grant from the National Science Foundation's (NSF) Neural and Cognitive Systems program as part of the BRAIN Initiative, brings together researchers at the University of Colorado Boulder, Rice University and UC San Diego.
Your team is creating software that will predict how well students will perform on tests based on what they highlight in the digital textbooks. How does this work?
We will create tools that use a student's highlights to create customized quizzes and reviews. We will collect annotations from a group of learners to draw inferences about individual users. We will use these data to infer a student's depth of understanding of facts and concepts, predict test performance and even perform scholastic "interventions" that improve learning outcomes. The idea is to reformulate selected passages into review questions that encourage the active reconstruction and elaboration of knowledge.
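To make the idea concrete, here is a minimal sketch of how highlights might feed a performance prediction. Everything in it is illustrative: the feature set (coverage, precision, recall against a set of "key" passages) and the logistic-model weights are invented for this example, not taken from the project, and in practice the weights would be fit on annotation data pooled across many learners.

```python
import math

def highlight_features(highlighted_ids, passage_ids, key_ids):
    """Toy features from one student's highlights (hypothetical feature set)."""
    coverage = len(highlighted_ids) / len(passage_ids)      # fraction of passages marked
    hits = len(highlighted_ids & key_ids)                   # key passages the student marked
    precision = hits / len(highlighted_ids) if highlighted_ids else 0.0
    recall = hits / len(key_ids)
    return coverage, precision, recall

def predict_quiz_score(highlighted_ids, passage_ids, key_ids,
                       weights=(-1.5, 1.0, 2.0), bias=0.0):
    """Logistic model mapping highlight features to a predicted quiz accuracy.
    The weights are made up for illustration; a real system would learn them
    from data collected across a population of learners."""
    coverage, precision, recall = highlight_features(highlighted_ids, passage_ids, key_ids)
    z = bias + weights[0] * coverage + weights[1] * precision + weights[2] * recall
    return 1.0 / (1.0 + math.exp(-z))

# A selective highlighter vs. a highlight-everything reader over 10 passages,
# 3 of which are "key":
selective = predict_quiz_score({2, 5, 7}, set(range(1, 11)), {2, 5, 7})
everything = predict_quiz_score(set(range(1, 11)), set(range(1, 11)), {2, 5, 7})
```

Under these made-up weights, the selective student receives a higher predicted score than the indiscriminate one, which matches the intuition described below that highlighting everything signals a failure to pick out the essential ideas.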
How are the digital textbooks different from traditional textbooks?
While traditional textbooks are designed to transmit information from the printed page to the learner, contemporary digital textbooks offer the opportunity to unobtrusively gather information from learners as they read. With a better understanding of a learner's state of mind, textbooks can make personalized recommendations for further study and review.
Study participants will use online textbooks provided by the nonprofit, open-source textbook publisher OpenStax (based at Rice). How did your collaboration with OpenStax begin?
My collaborator, Rich Baraniuk, started a foundation called OpenStax five years ago to support professionally curated open-access textbooks, since textbook pricing is prohibitive. In a short time, their texts have reached significant penetration in a range of courses around the country, particularly in introductory science courses. Students using the texts range from high school advanced placement classes to junior colleges to traditional four-year undergraduate institutions. It is really amazing how rapidly they have been adopted. OpenStax holds an education summit each year, and Baraniuk invited both me and Hal Pashler to participate in recent years. Baraniuk is a machine-learning researcher, and his goal has always been to develop a platform from which he could conduct research in machine learning and education.
What do you hope to gain from the study?
Because students seem compelled to highlight text as they read, it will be easy to obtain data from a large population. Psychologists studying highlighting have found little value in the act of highlighting per se to the learner, and simply re-reading highlighted passages is not nearly as effective as testing oneself. With electronic texts, new opportunities arise. First, we can use the highlights themselves to diagnose the learner's state of understanding. A student who highlights every passage may not be picking out the essential ideas. And the patterns of highlights may tell us about a student's particular interests and focus. Second, we can use the highlighted passages to support review in a manner that is more beneficial to the student than simply re-reading the passages. We aim to reformulate the passages into questions which the text can ask the student at appropriate times to support long-term retention of the material.
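One simplistic stand-in for the reformulation step is cloze deletion: blanking out a content word in a highlighted passage so the student must actively recall it rather than re-read it. The project does not specify this method; the stopword list and word-length heuristic below are assumptions made purely for illustration.

```python
import random

# Minimal stopword list; words on it (and very short words) are never blanked.
STOPWORDS = {"the", "a", "an", "of", "in", "is", "are", "to", "and", "that"}

def make_cloze(passage, rng=random):
    """Turn a highlighted passage into a fill-in-the-blank review question.
    Returns (question, answer), or None if no suitable word exists."""
    words = passage.split()
    candidates = [i for i, w in enumerate(words)
                  if w.strip(".,;").lower() not in STOPWORDS and len(w) > 3]
    if not candidates:
        return None
    i = rng.choice(candidates)           # pick one content word to blank out
    answer = words[i].strip(".,;")
    words[i] = "_____"
    return " ".join(words), answer

question, answer = make_cloze(
    "Mitochondria generate most of the cell's chemical energy.",
    rng=random.Random(0))
```

A real question generator would need to handle multi-word answers and grammatical rewording, but even this crude transformation turns passive re-reading into the self-testing that the research cited above shows is more effective.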
How did you become interested in pursuing this line of study?
Rob Lindsey, who was funded by TDLC, conducted a brilliant study with Spanish-as-a-foreign-language students from a Denver area middle school. Rob provided software that reviewed previously learned vocabulary and skills. The students using this software for about 30 minutes a week showed a 16.5% improvement in overall course retention a month after the end of the semester, as compared to a time-matched control that reflected current educational practice. The software made personalized predictions for each student about which material they would benefit most from reviewing, based on the student's own past history and data from the population of students. The limit of this work was that we had only a very weak signal concerning the student's state of knowledge: we knew only whether a student could answer a particular question correctly or not at a particular moment. The challenge to using big-data methods in education is to glean more information about what's going on inside a student's head. This new project is an attempt to collect data from students as they are first exposed to material via an electronic textbook.
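The personalized-review idea can be sketched with a toy exponential forgetting curve: each item's recall probability decays with time since last review, at a rate governed by a per-item "strength" that grows with the student's past success. This is a deliberately simplified stand-in, not the model used in the published study; the field names and the budget-based selection rule are assumptions for this example.

```python
import math

def recall_probability(days_since_review, strength):
    """Toy forgetting curve: p = exp(-t / strength).
    Higher strength (more past success on the item) means slower forgetting."""
    return math.exp(-days_since_review / strength)

def pick_reviews(items, budget):
    """Select the `budget` items the student is most at risk of forgetting.
    Each item is a dict with 'word', 'days' (since last review), 'strength'."""
    ranked = sorted(items,
                    key=lambda it: recall_probability(it["days"], it["strength"]))
    return [it["word"] for it in ranked[:budget]]

# Example: vocabulary items with different review histories.
items = [{"word": "perro", "days": 30, "strength": 5},   # long-neglected, weak
         {"word": "gato",  "days": 2,  "strength": 10},  # recently reviewed
         {"word": "casa",  "days": 30, "strength": 40}]  # neglected but strong
```

With these numbers, `pick_reviews(items, 2)` prioritizes "perro" and "casa," the items with the lowest predicted recall, reflecting the same logic of allocating a limited weekly review budget to the material each student is most likely to have forgotten.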
University of Colorado Boulder article