Tests for experiments with matched groups or repeated measures designs use error terms that involve the correlation between the measures as well as the variance of the data. The larger the correlation between the measures, the smaller the error term and the larger the test statistic. If an effect size is computed from the test statistic without taking the correlation between the measures into account, the effect size will be overestimated. Procedures for computing effect size appropriately from matched groups or repeated measures designs are discussed.

The purpose of this article is to address issues that arise when meta-analyses are conducted on experiments with matched groups or repeated measures designs. It should be made clear at the outset that although this article pertains to meta-analyses of experiments with correlated measures, it does not pertain to meta-analyses of correlations. Experimental designs of this kind, often called matched groups designs or designs with repeated measures, we call correlated designs (CDs), and their analysis is decidedly different from that of independent groups designs (IGDs).

The matched groups design in its simplest form occurs when subjects are matched on some variable and then randomly assigned by matched pairs to experimental and control conditions. The correlation for this type of CD is the correlation between experimental and control scores across matched pairs. The second type of CD is the repeated measures design, which in its simplest form tests the same subject under both experimental and control conditions, usually in random or counterbalanced orders to minimize carryover effects. The correlation of importance here is the correlation that commonly occurs between the repeated measures, and this correlation is often quite high in human research.
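The overestimation described above can be illustrated numerically. The sketch below (not taken from the article) simulates a simple repeated measures experiment, computes the correlated-groups t statistic, and compares the naive conversion d = t√(2/n), which treats t as if it came from an independent groups design, against the correlation-adjusted conversion d = t√(2(1 − r)/n). The adjusted formula is one standard correction and is an assumption here, as are all the simulation parameters; the adjusted value is checked against an effect size computed directly from the means and pooled standard deviation.

```python
import math
import random

random.seed(0)
n = 30

# Same subjects under control and experimental conditions; the shared
# subject component induces a high correlation between the two scores.
control = [random.gauss(50, 10) for _ in range(n)]
experimental = [c + 5 + random.gauss(0, 6) for c in control]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Paired (correlated-groups) t statistic: its error term is the SD of the
# difference scores, which shrinks as the correlation r grows.
diffs = [e - c for e, c in zip(experimental, control)]
t_c = mean(diffs) / (sd(diffs) / math.sqrt(n))

# Correlation between experimental and control scores across subjects.
mc, me, sc, se = mean(control), mean(experimental), sd(control), sd(experimental)
r = (sum((c - mc) * (e - me) for c, e in zip(control, experimental))
     / ((n - 1) * sc * se))

d_naive = t_c * math.sqrt(2 / n)                # ignores r: inflated
d_corrected = t_c * math.sqrt(2 * (1 - r) / n)  # removes r's shrinkage of the error term
d_direct = (me - mc) / math.sqrt((sc ** 2 + se ** 2) / 2)  # d from means and pooled SD

print(f"r = {r:.2f}, naive d = {d_naive:.2f}, corrected d = {d_corrected:.2f}")
```

With a substantial correlation between the measures, the naive conversion inflates d by roughly a factor of 1/√(1 − r), while the corrected value agrees closely with the effect size computed directly from the raw means and standard deviations.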