Replication and the Serial-Position Curve (Dan Reisberg, Ch. 5)

In the methods essays so far, we've talked about some of the steps needed to make sure an individual result from a particular experiment is unambiguous. We've talked about the need for a precise hypothesis, so that there's no question about whether the result fits with the hypothesis or not. We've talked about the advantages of random assignment, to make certain that the result couldn't be the product of preexisting differences in our comparison groups. We've discussed the need to remove confounds so that, within the experiment, there is no ambiguity about what caused the differences we observe.

Notice that all of these points concern the interpretation of individual results, so that each experiment yields clear and unambiguous findings. It's important to add, though, that researchers rarely draw conclusions from individual experiments, no matter how well designed the experiment is. One reason for this is statistical: A successful replication (a reproduction of the result in a new experiment) provides assurance that the original result wasn't just a fluke or a weird accident. Another reason is methodological: If we can replicate a result with a new experimenter, new participants, and new stimuli, this tells us there was nothing peculiar about these factors in the first experiment. This is our guarantee that the result was produced by the factors deliberately varied in the experiment and was not the chance by-product of some unnoticed factor in the context.

In addition, researchers generally don't repeat experiments exactly as they were run the first time. Instead, replications usually introduce new factors into the design, to ask how these alter the results. Specifically, researchers offer a hypothesis about the original result and then deduce from this hypothesis predictions about factors that should alter the data pattern. Testing these predictions allows them to test the hypothesis.

We gave an example of this method in the textbook: If people are asked to recall as many words as they can from a list they just heard, the results show a characteristic U-shaped serial-position curve. This result is easily replicated, so we know it doesn't depend on idiosyncratic features of the experimental context. We therefore want to ask: What causes this reliable pattern? One proposal, of course, is provided by the "modal model," a theoretical account of memory's basic architecture. But is this model correct?
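To make the data pattern concrete, here is a minimal sketch (in Python) of how a serial-position curve is computed from free-recall data: for each list position, we tally the proportion of trials on which the word presented at that position was recalled. The word lists and recall protocols below are made-up illustrations, not data from any actual study.

```python
# Minimal sketch: computing a serial-position curve from free-recall data.
# The study lists and recall protocols below are hypothetical illustrations.

def serial_position_curve(study_lists, recall_protocols):
    """For each list position, return the proportion of trials on which
    the word presented at that position was recalled."""
    list_length = len(study_lists[0])
    recalled_counts = [0] * list_length
    for studied, recalled in zip(study_lists, recall_protocols):
        recalled_set = set(recalled)
        for position, word in enumerate(studied):
            if word in recalled_set:
                recalled_counts[position] += 1
    n_trials = len(study_lists)
    return [count / n_trials for count in recalled_counts]

# Two made-up trials with six-word lists; real experiments use longer
# lists and many participants.
lists = [
    ["dog", "tree", "lamp", "coin", "brick", "rose"],
    ["fork", "cloud", "shoe", "grape", "chair", "moon"],
]
recalls = [
    ["dog", "rose", "brick"],   # first and last items recalled, middle lost
    ["fork", "moon", "chair"],
]
print(serial_position_curve(lists, recalls))
# -> [1.0, 0.0, 0.0, 0.0, 1.0, 1.0]  (U-shaped: ends recalled, middle not)
```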

To address this question, researchers have varied factors in the basic list-learning experiment that should, if the hypothesis is correct, alter the results. One factor is speed of list presentation: According to our hypothesis, if we slow down the presentation, this should increase recall for all but the last few words on the list. A different factor is distraction right after the list's end: Our hypothesis predicts that this will decrease the recency effect but will have no other effects. These predictions both turn out to be right.
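To show how such predictions can be derived from the hypothesis, here is a hedged toy simulation of the modal model's logic. The parameter values and the probability rule are illustrative assumptions, not the model itself: early items get extra rehearsal and so are more likely to reach long-term storage (primacy); the last few items are still in working memory at recall (recency) unless a distractor task displaces them; and slowing presentation adds rehearsal time for every item, raising recall for all but the final few positions.

```python
import random

# Toy sketch of the modal model's logic (illustrative parameters only):
# - Earlier items enjoy more rehearsal, so they are more likely to reach
#   long-term storage (primacy).
# - The last few items are still in working memory at recall (recency),
#   unless a distractor task intervenes and displaces them.
# - Slower presentation adds rehearsal time for every item.

def simulate_recall(list_length=15, seconds_per_word=1.0,
                    distractor_after_list=False, n_trials=5000, seed=0):
    rng = random.Random(seed)
    recalled = [0] * list_length
    for _ in range(n_trials):
        for pos in range(list_length):
            # Earlier positions get more rehearsal; slower presentation helps all.
            rehearsal = seconds_per_word * (1 + (list_length - pos) / list_length)
            p_long_term = min(0.9, 0.15 * rehearsal)
            in_working_memory = (pos >= list_length - 3) and not distractor_after_list
            if in_working_memory or rng.random() < p_long_term:
                recalled[pos] += 1
    return [count / n_trials for count in recalled]

fast = simulate_recall(seconds_per_word=1.0)
slow = simulate_recall(seconds_per_word=2.0)              # better recall, except recency
distracted = simulate_recall(distractor_after_list=True)  # recency advantage removed
print("fast:      ", [round(p, 2) for p in fast])
print("slow:      ", [round(p, 2) for p in slow])
print("distracted:", [round(p, 2) for p in distracted])
```

Running this toy version reproduces the qualitative pattern described above: slower presentation lifts the curve everywhere except the last few positions, and a post-list distractor flattens only the recency portion.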

Notice, then, that our confidence in our hypothesis rests on many results: results showing the replicability of the basic finding, and then other results testing predictions derived from our hypothesis. In the end, it's this fabric of results, easily explained by our hypothesis and not easily explained in any other way, that convinces us that our hypothesis is indeed correct.