Abstract

This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. "No tutoring" refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by the granularity of their user interface interaction into answer-based, step-based, and substep-based tutoring systems. Most intelligent tutoring systems have step-based or substep-based granularities of interaction, whereas most other tutoring systems (often called CAI, CBT, or CAL systems) have answer-based user interfaces. It is widely believed that as the granularity of tutoring decreases, the effectiveness increases. In particular, when compared to no tutoring, the effect sizes of answer-based tutoring systems, intelligent tutoring systems, and adult human tutors are believed to be d = 0.3, 1.0, and 2.0, respectively. This review did not confirm these beliefs. Instead, it found that the effect size of human tutoring was much lower: d = 0.79. Moreover, the effect size of intelligent tutoring systems was 0.76, so they are nearly as effective as human tutoring.

ACKNOWLEDGMENTS

I am grateful for the close readings and thoughtful comments of Michelene T. H. Chi, Dexter Fletcher, Jared Freedman, Kasia Muldner, and Stellan Ohlsson. My research summarized here was supported by many years of funding from the Office of Naval Research (N00014-00-1-0600) and the National Science Foundation (9720359, EIA-0325054, 0354420, 0836012, and DRL-0910221).

Notes

For the read-only studying condition (number 5 in the list), these last two experiments used an experimenter-written text instead of a commercial textbook. Although the interaction granularity hypothesis predicts that no-tutoring instruction should be less effective than tutoring, students in this no-tutoring condition had learning gains as high as those of students in the other conditions. It is not clear why they learned so well, but it may be due to a lack of fatigue, because they finished their reading much more quickly than the other students, who also wrote essays, interacted with a tutor, and so on. One study (di Eugenio et al., 2006) analyzed the behaviors of tutors and compared their effectiveness but was not included in the meta-analysis because effect sizes could not be computed.
Published in: Educational Psychologist
Volume 46, Issue 4, pp. 197–221