Northern Prairie Wildlife Research Center

These methods date back only to the mid-1970s. They are based on theory published in the early 1950s and are just beginning to see use in theoretical and applied ecology. A synthesis of this general approach is given by Burnham and Anderson (1998). Much of classical frequentist statistics (except the null hypothesis testing methods) underlies and is part of the information-theoretic approach; however, the philosophies of the two paradigms are substantially different.

As part of the Methods section of a paper, describe and justify the a priori
hypotheses and models in the set and how these relate specifically to the
study objectives. Avoid routinely including a trivial null hypothesis or model
in the model set; all models considered should have some reasonable level
of interest and scientific support (Chamberlin's [1965] concept of "multiple
working hypotheses"). The number of models (*R*) should be small in most
cases. If the study is only exploratory, then the number of models might be
larger, but this situation can lead to inferential problems (e.g., inferred
effects that are actually spurious; Anderson et al. 2001). Situations with
more models than samples (i.e., *R* > *n*) should be avoided,
except in the earliest phases of an exploratory investigation. Models with
many parameters (e.g., *K* ~ 30-200) often find little support, unless
sample size or effect sizes are large or if the residual variance is quite
small.

A common mistake is the use of Akaike's Information Criterion (AIC) rather
than the second-order criterion, AIC_{c}. Use AIC_{c}
(a generally appropriate small-sample version of AIC) unless the number of
observations is at least 40 times the number of explanatory variables (i.e.,
*n*/*K* > 40 for the largest *K* over all *R* models).
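The small-sample correction can be sketched in a few lines of Python; the function name and interface here are illustrative, not from the source:

```python
def aicc(log_lik, k, n):
    """Second-order criterion AIC_c = AIC + 2K(K+1)/(n - K - 1),
    where AIC = -2 log_e(L) + 2K, L is the maximized likelihood,
    K the number of estimated parameters, and n the sample size."""
    if n - k - 1 <= 0:
        raise ValueError("AIC_c requires n > K + 1")
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)
```

As *n*/*K* grows, the correction term shrinks toward zero and AIC_{c} converges to AIC, which is why AIC alone is tolerable only for large samples.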
If using count data, provide some detail on how goodness of fit was assessed
and, if necessary, an estimate of the variance inflation factor (*c*)
and its degrees of freedom. If evidence of overdispersion (Liang and McCullagh
1993) is found, the log-likelihood must be computed as log_{e}(𝓛)/ĉ
and used in QAIC_{c}, a selection criterion based on
quasi-likelihood theory (Anderson et al. 1994). When the appropriate criterion
has been identified (AIC, AIC_{c}, or QAIC_{c}), use that one
criterion consistently for every model in the set.
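The quasi-likelihood adjustment amounts to dividing the log-likelihood by the estimated variance inflation factor ĉ before applying the usual correction; a minimal sketch, with the convention (per Burnham and Anderson) that *K* counts ĉ as an additional estimated parameter:

```python
def qaicc(log_lik, c_hat, k, n):
    """QAIC_c: quasi-likelihood analogue of AIC_c for overdispersed
    count data.  log_lik is divided by the variance inflation factor
    c_hat; k should include one parameter for estimating c_hat."""
    qaic = -2.0 * log_lik / c_hat + 2.0 * k
    return qaic + (2.0 * k * (k + 1)) / (n - k - 1)
```

With ĉ = 1 (no overdispersion), QAIC_{c} reduces to AIC_{c}.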

Discuss or reference the use of other aspects of the information-theoretic
approach, such as model averaging, a confidence set on models, and examination
of the relative importance of variables. Define or reference the notation
used (e.g., *K*, Δ_{i}, and *w_{i}*).
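Model averaging weights each model's estimate of a parameter by that model's Akaike weight, rather than conditioning on a single selected model; a minimal sketch (function name is mine):

```python
def model_average(estimates, weights):
    """Model-averaged estimate: sum over models of w_i * theta_hat_i.
    weights are Akaike weights; they are renormalized here in case
    averaging is done over a subset of the full model set."""
    total = sum(weights)
    return sum(w * est for w, est in zip(weights, estimates)) / total
```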

For well-designed, true experiments in which the number of effects or factors is small and factors are orthogonal, use of the full model will often suffice (rather than considering more parsimonious models). If an objective is to assess the relative importance of variables, inference can be based on the sum of the Akaike weights for each variable, across models that include that variable, and these sums should be reported (Burnham and Anderson 1998:140-141). Avoid the implication that variables not in the selected (estimated best) model are unimportant.
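The sum-of-weights calculation for relative importance described above can be sketched as follows (the data structures are illustrative):

```python
def relative_importance(model_vars, weights):
    """Relative importance of each variable: the sum of the Akaike
    weights of all models in which that variable appears.
    model_vars: one set of variable names per model;
    weights: the corresponding Akaike weights."""
    importance = {}
    for variables, w in zip(model_vars, weights):
        for v in variables:
            importance[v] = importance.get(v, 0.0) + w
    return importance
```

A variable appearing only in low-weight models receives a small sum, but not zero, which is exactly why absence from the single best model does not imply unimportance.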

The results should be easy to report if the Methods section outlines convincingly
the science hypotheses and associated models of interest. Show a table of
the value of the maximized log-likelihood function (log_{e}(𝓛)),
the number of estimated parameters (*K*), the appropriate selection criterion
(AIC, AIC_{c}, or QAIC_{c}), the simple differences
(Δ_{i}), and the Akaike weights (*w_{i}*).
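The Δ_{i} and Akaike weights in such a table are computed from the criterion values alone; a minimal sketch, assuming (as usual) that smaller criterion values indicate better-supported models:

```python
import math

def deltas_and_weights(criteria):
    """Delta_i = criterion_i - min(criterion); Akaike weight
    w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2).
    The weights sum to 1 over the model set."""
    best = min(criteria)
    deltas = [c - best for c in criteria]
    rel_lik = [math.exp(-0.5 * d) for d in deltas]
    total = sum(rel_lik)
    return deltas, [r / total for r in rel_lik]
```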

Do not include test statistics and *P*-values when using the information-theoretic
approach since this inappropriately mixes differing analysis paradigms. For
example, do not use AIC_{c} to rank models in the set and then
test if the best model is significantly better than the second best model
(no such test is valid). Do not imply that the information-theoretic approaches
are a test in any sense. Avoid the use of terms such as significant and not
significant, or rejected and not rejected; instead view the results in a strength
of evidence context (Royall 1997).

If some analysis and modeling were done after the a priori effort (often called data dredging), then make sure this procedure is clearly explained when such results are mentioned in the Discussion section. Give estimates of important parameters (e.g., effect size) and measures of precision (preferably a confidence interval).
