## Model Selection and Model Averaging

In many fields, researchers compare different models that describe possible relationships in their data. In this workshop, we will examine one approach to identifying the models best supported by the evidence in the data, applied to a set of candidate models developed by a researcher to address a specific research question.

We will discuss a basic approach to modeling that begins with developing specific scientific hypotheses. Next, the researcher decides what responses will be measured to examine these hypotheses. The hypotheses are then framed as a set of models relating the responses to a set of explanatory factors or variables. A necessary part of building these models is identifying an appropriate statistical distribution for each response variable. Ideally, these initial steps are completed before data are collected, but they can also be carried out for a data set that has already been collected.
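The steps above can be sketched in R. This is a minimal illustration, not part of the workshop materials: the variable names (`site_area`, `species_count`) and the simulated data are hypothetical choices, used here to show a hypothesized relationship framed as a model with a Poisson distribution assumed for a count response.

```r
# Illustrative only: simulate a count response related to one explanatory
# variable, then frame the hypothesis as a generalized linear model with a
# Poisson distribution for the response.
set.seed(1)
site_area <- runif(50, min = 1, max = 10)          # hypothetical explanatory variable
species_count <- rpois(50, lambda = exp(0.5 + 0.2 * site_area))  # hypothetical response

# The model encodes the hypothesis: counts increase with area.
fit <- glm(species_count ~ site_area, family = poisson)
summary(fit)
```

In practice, the set of candidate models would come from competing scientific hypotheses rather than simulated data, but the structure (response, explanatory variables, distribution) is the same.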

Once the statistical distribution of the response variable has been specified, its density function is used to construct a likelihood function, which measures the plausibility of the model's parameter values given the data that have been collected. The likelihood function is used to calculate a measure of statistical evidence called the Akaike Information Criterion (AIC). AIC can be used to compare candidate models and to compute model weights that express the relative amount of evidence in the data for each model. In some situations, it is also possible to obtain parameter estimates averaged across all plausible models. We will explore statistical software in R for carrying out these calculations.
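As a rough sketch of how such a comparison looks in base R: fit a few candidate models, extract their AIC values with `AIC()`, and convert the AIC differences into Akaike weights. The `mtcars` data set and these particular models are illustrative assumptions, not the workshop's examples.

```r
# Three nested candidate models for the same response (illustrative).
m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)
m3 <- lm(mpg ~ wt + hp + cyl, data = mtcars)

aics <- c(m1 = AIC(m1), m2 = AIC(m2), m3 = AIC(m3))

# Akaike weights: differences from the smallest AIC, rescaled and normalized,
# giving the relative evidence in the data for each candidate model.
delta <- aics - min(aics)
weights <- exp(-delta / 2) / sum(exp(-delta / 2))

round(aics, 2)
round(weights, 3)
```

Contributed R packages (for example, `AICcmodavg` or `MuMIn`) automate this kind of table and also support model-averaged parameter estimates, though the base-R calculation above is enough to see what the weights mean.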

This workshop requires a basic understanding of statistics at the level of a one-semester statistical methods course. Prior exposure to the R programming language is helpful.