Manipulative experimentation is a very effective way to determine causal relationships. One poses questions to nature via experiments, applying treatments such as selective logging. By manipulating the system, the investigator reduces the chance that something other than the treatment caused the observed results. Further, as Macnab (1983) emphasized, little can be learned about the dynamics of systems at equilibrium; manipulation helps reveal how systems respond to change. Experimentation also forms the basis of what has been termed strong inference (Platt 1964), in which alternative hypotheses are devised and crucial experiments are performed to exclude 1 or more of the hypotheses.
Wildlife ecologists sometimes face severe difficulties in meeting the requirements of control, randomization, and replication in manipulative experiments. Many systems are too large and complex for ecologists to manipulate (Macnab 1983). Often "treatments," such as oil spills, are applied by others, and wildlife ecologists are called in to evaluate their effects. In such situations, randomization is impossible and replication undesirable. Methods for conducting environmental studies other than replicated experiments are available (Smith and Sugden 1988, Eberhardt and Thomas 1991); among these are experiments without replication, observational studies, and sample surveys.
Replication is particularly difficult in experiments at the ecosystem level, which are more complex but also more meaningful than experiments at the microcosm or mesocosm level, where replication is more feasible (Carpenter 1990, 1996; Schindler 1998). Experiments lacking replication can be, and indeed often have been, analyzed by taking multiple measurements of the system and treating them as independent replicates. This practice was criticized by Eberhardt (1976) and Hurlbert (1984), the latter naming it pseudoreplication. I address this topic more fully below.
Observational studies lack the critical element of control by the investigator, although they can be analyzed similarly to an experimental study (Cochran 1983). One is less certain that the presumed treatment actually caused the observed response, however. In lieu of controlled experimentation, one can (1) reduce the influence of extraneous effects by restricting the scope of inference to situations similar to the one under observation; (2) employ matching, by which treated units are compared with units that were not treated but in other regards are as similar as possible to the treated units; or (3) adjust for the effects of other variables during analysis, with methods such as analysis of covariance (Eberhardt and Thomas 1991).
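As a concrete illustration of the third option, the following is a minimal sketch using simulated data; the variable names (habitat, density, treated), the effect sizes, and the use of the statsmodels library are illustrative assumptions, not drawn from the studies cited. It shows how an analysis of covariance adjusts an observational treatment comparison for an extraneous variable that differs between the treated and untreated units.

```python
# Minimal sketch (hypothetical data): adjusting an observational comparison
# for an extraneous variable with analysis of covariance.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40
# 'treated' units were not assigned at random; 'habitat' is a covariate that
# differs between groups and also affects the response.
habitat = rng.uniform(0, 10, n)
treated = (rng.uniform(0, 1, n) + habitat / 20 > 0.7).astype(int)
density = 2.0 + 0.5 * habitat + 1.5 * treated + rng.normal(0, 1, n)

df = pd.DataFrame({"density": density, "treated": treated, "habitat": habitat})

# Unadjusted comparison confounds the treatment with habitat quality...
print(smf.ols("density ~ treated", data=df).fit().params)
# ...whereas the covariance analysis adjusts for habitat before estimating
# the treatment effect.
print(smf.ols("density ~ treated + habitat", data=df).fit().params)
```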
Longitudinal observational studies, with measurements taken before and after some treatment, generally are more informative than cross-sectional observational studies, in which treated and untreated units are studied only after the treatment (Cox and Wermuth 1996). (Of course, measurements on experimental and control units before and after treatments are highly desirable in experimental studies, as well as observational studies.) Intervention analysis is a method used to assess the effect of some distinct treatment (intervention) that has been applied to a system. The intervention was not assigned by the investigator and cannot reasonably be replicated. One approach is to model the system as a time series and look for changes subsequent to the intervention. That approach was taken with air-quality data by Box and Tiao (1975), who sought to determine how ozone levels might have responded to events such as a change in the formulation of gasoline.
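To make the time-series approach concrete, the sketch below is not Box and Tiao's actual analysis; it simulates a series with autocorrelated errors and a step change at a known intervention time, then estimates the size of that change by including a step indicator as an exogenous regressor in an AR(1) model. The statsmodels interface and all numbers are illustrative assumptions.

```python
# Minimal sketch (simulated series): intervention analysis in the spirit of
# Box and Tiao (1975), with a step change at a known intervention time and
# autocorrelated errors. All values are illustrative.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
n, t0 = 120, 60                          # series length, time of intervention
step = (np.arange(n) >= t0).astype(float)

# AR(1) noise plus a drop of 2 units after the intervention.
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 1)
y = 10.0 - 2.0 * step + noise

# Model the series as AR(1) with the step indicator as an exogenous input;
# the coefficient on 'step' estimates the change attributable to the intervention.
fit = SARIMAX(y, exog=step, order=(1, 0, 0)).fit(disp=False)
print(fit.params)
```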
Sometimes it is known that a major treatment will be applied at a particular site, such as a dam to be constructed on a river. It may be feasible to study that river before as well as after the dam is constructed. That simple before-and-after comparison suffers from the weakness that any change coincident with dam construction, such as a decrease in precipitation, would be confounded with changes resulting from the dam, unless the change were specifically included in the model. To account for the effects of other variables, one can study similar rivers during the same before-and-after period. Ideally, these rivers would be similar to, and close enough to, the treated river so as to be equally influenced by other variables but not influenced by the treatment itself. This design has been called the BACI (before-after, control-impact) design (Stewart-Oaten et al. 1986, Stewart-Oaten and Bence 2001, Smith 2002) and is used for assessing the effects of such impacts.
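A minimal sketch of one way a BACI comparison might be set up follows; the data are invented, and fitting a simple period-by-site interaction is only one of several analyses proposed in the papers cited (Stewart-Oaten et al. 1986, for example, work with impact-minus-control differences).

```python
# Minimal sketch (hypothetical data) of a BACI analysis: the period-by-site
# interaction estimates the change at the impacted river beyond the change
# shared with the reference (control) rivers.
import pandas as pd
import statsmodels.formula.api as smf

# One measurement per river per year; 'impact' marks the dammed river,
# 'after' marks years following dam construction. Values are illustrative.
df = pd.DataFrame({
    "flow":   [8.1, 7.9, 8.3, 5.0, 4.8, 5.2,
               8.0, 8.2, 7.8, 8.1, 7.7, 8.0],
    "impact": [1, 1, 1, 1, 1, 1,
               0, 0, 0, 0, 0, 0],
    "after":  [0, 0, 0, 1, 1, 1,
               0, 0, 0, 1, 1, 1],
})

fit = smf.ols("flow ~ impact * after", data=df).fit()
# 'impact:after' is the BACI effect: the before-to-after change at the
# impacted site minus the corresponding change at the control sites.
print(fit.params["impact:after"])
```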
It is difficult for investigators to manipulate large and complex systems such as ecosystems. But wildlife managers, as well as those who manage ecosystems for other objectives such as timber production, do so frequently. This disparity between investigators and managers led Macnab (1983) to recommend that management activities be viewed as experiments that offer opportunities to learn about large systems. Actions taken primarily for management purposes generally lack controls, randomization, and replication; where feasible, these features should be incorporated so that the actions can serve as true experiments. Key assumptions should be identified and stated as hypotheses, rather than treated as facts. The results of management actions, even if they show no effect, should be measured and reported.
The adaptive resource management approach blends the idea of learning about a system with the management of the system (Walters 1986, Williams et al. 2002). The key notion, which moves the concept beyond a "try something and if it doesn't work try something else" attitude, is that knowledge about the system becomes one of the products of the system that is to be optimized.
Sample surveys differ from experiments in that one endeavors either to estimate some characteristic over some domain, such as the number of mallards in the major breeding range in North America, or to compare variables among groups, such as the median age of hunters with that of nonhunters.
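As a simple illustration of the estimation goal, the sketch below computes a design-based stratified estimate of a population total and its standard error. The stratum sizes and counts are made up, and the estimator shown, the usual expansion estimator with a finite-population correction, is just one standard choice.

```python
# Minimal sketch (made-up numbers): a design-based stratified estimate of a
# population total, the kind of quantity a sample survey targets.
import numpy as np

# Stratum sizes (number of sampling units, e.g., plots) and sampled counts.
N_h = np.array([500, 300, 200])                  # units per stratum
samples = [np.array([12, 15, 9, 14]),            # counts on sampled plots
           np.array([4, 6, 5]),
           np.array([1, 0, 2, 1])]

means = np.array([s.mean() for s in samples])
vars_ = np.array([s.var(ddof=1) for s in samples])
n_h = np.array([len(s) for s in samples])

total = np.sum(N_h * means)                      # expansion estimator
# Variance of the estimated total under simple random sampling within strata,
# with the finite-population correction.
var_total = np.sum(N_h ** 2 * (1 - n_h / N_h) * vars_ / n_h)
print(total, np.sqrt(var_total))
```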