We wrote this book to introduce graduate students and research workers in various scientific disciplines to the use of information-theoretic approaches in the analysis of empirical data. In its fully developed form, the information-theoretic approach allows inference based on more than one model (including estimates of unconditional precision); in its initial form, it is useful in selecting a "best" model and ranking the remaining models. We believe that often the critical issue in data analysis is the selection of a good approximating model that best represents the inference supported by the data (an estimated "best approximating model").

Information theory includes the well-known Kullback-Leibler "distance" between two models (actually, probability distributions), and this represents a fundamental quantity in science. In 1973, Hirotugu Akaike derived an estimator of the (relative) Kullback-Leibler distance based on Fisher's maximized log-likelihood. His measure, now called Akaike's information criterion (AIC), provided a new paradigm for model selection in the analysis of empirical data. His approach, with a fundamental link to information theory, is relatively simple and easy to use in practice, but it is little taught in statistics classes and far less understood in the applied sciences than it should be.

We do not accept the notion that there is a simple, "true model" in the biological sciences. Instead, we view modeling as an exercise in the approximation of the explainable information in the empirical data, in the context of the data being a sample from some well-defined population or process. Selection of a best approximating model represents the inference from the data and tells us what "effects" (represented by parameters) can be supported by the data. We focus on Akaike's information criterion (and various extensions) for selection of a parsimonious model as a basis for statistical inference. Later chapters offer formal methods to
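To make the two quantities mentioned above concrete, here is a minimal sketch in Python of the discrete Kullback-Leibler "distance" and of AIC in its standard form, AIC = -2 log(L) + 2k, where log(L) is the maximized log-likelihood and k is the number of estimated parameters. The function names and the numerical log-likelihood values in the example are illustrative, not taken from the text.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler 'distance' from distribution p to distribution q.

    p and q are sequences of probabilities over the same discrete support.
    Terms with p_i == 0 contribute nothing, by the usual convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def aic(max_log_likelihood, k):
    """Akaike's information criterion: AIC = -2 log(L) + 2k."""
    return -2.0 * max_log_likelihood + 2.0 * k

# Illustrative comparison of two candidate models fit to the same data
# (the log-likelihoods and parameter counts here are made-up values):
aic_simple = aic(max_log_likelihood=-120.3, k=2)   # 2-parameter model
aic_complex = aic(max_log_likelihood=-119.8, k=5)  # 5-parameter model

# The model with the smaller AIC is the estimated best approximating
# model; here the simpler model is favored despite its slightly lower
# maximized likelihood, illustrating the parsimony trade-off.
```

The penalty term 2k is what enforces parsimony: adding parameters always raises the maximized log-likelihood, so a model must improve the fit by more than one log-likelihood unit per extra parameter to earn a lower AIC.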