*Previous page:* Analyzing Cholesterol Dataset – Part 1

Another method of inspecting the data is to use an Analysis of Variance. As with all analysis, the appropriate model depends on the question you are asking. To begin, we will ask “do the levels of our independent variable have an effect on our dependent variable?”.

**One-Way Within-Subjects ANOVA**

Firstly, make sure you’ve uploaded the `Cholesterol.csv` dataset. If you’ve been following along since the paired-samples *t*-test, you can simply go back to the top of the screen on MagicStat (version 1.1.3) and press `Explore` again to begin this analysis.

**1.** Next, `select a model to analyze your data`.

For datasets with more than two levels of your independent variable, it is appropriate to use an analysis of variance model to test whether your independent variable can account for the variation in your data. In this case, we are again ignoring the independent variable of type of margarine (A or B) and only looking at the three levels of participation in our study.

Choose the `One-Way Within Subjects ANOVA (One-Way Repeated Measures ANOVA)` model.

It is “One-Way” because there is only a single independent variable (duration of study participation) with three levels. It is a “Within Subjects” or “Repeated Measures” model because each participant in the study appears in all three levels (`Before`, `After4Weeks`, and `After8Weeks`). The levels vary *within each subject* and produce *repeated measures* of each participant.

**2.** After selecting the model you will be asked `Is your dataset long format or wide format?`.

Previews and descriptions of both wide format and long format are shown below.

If you look to the right panel of the MagicStat display, you will see a preview of our dataset and notice that it follows the wide format. Each row corresponds to one participant and contains data for each of the three levels of our independent variable.

Select `wide` and proceed.
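For intuition about the two formats, here is a minimal sketch showing how pandas can reshape wide-format rows into long format. The values and the `ID` column are illustrative assumptions, not rows from `Cholesterol.csv`:

```python
import pandas as pd

# Hypothetical wide-format rows in the style of our dataset;
# the ID column and all values are illustrative assumptions.
wide = pd.DataFrame({
    "ID": [1, 2, 3],
    "Before": [6.4, 6.8, 6.6],
    "After4Weeks": [5.8, 6.2, 5.9],
    "After8Weeks": [5.7, 6.1, 5.8],
})

# Long format: one row per (participant, level) pair.
long = wide.melt(id_vars="ID", var_name="Level", value_name="Cholesterol")
# 3 participants x 3 levels = 9 long-format rows
```

Either layout carries the same information; the tool simply needs to know which one it is reading.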

**3.** After specifying our model and data format, we next `Select independent variables` for analysis. Since we are only looking at a single variable (duration of study participation), we choose all three levels from the dropdown box.

**4.** Press `Analyze` and three tables of results will appear beneath the **Output** heading.

**Summary of sources of variance in our data**: When conducting an ANOVA it is important to keep in mind the framework being used to ask our question. In the simplest sense, an ANOVA asks where the variation in our data is coming from. Do the levels of our independent variable account for variation in our means over and above what we would expect from pure random chance?

First, imagine a completely naive approach where we throw out all information about how the data was collected. We ignore which subject we are measuring and also ignore which level of our independent variable the observation comes from. With this impoverished picture, all we could do is treat the dataset as a single sample and calculate its grand mean and variance. Why any given data point is higher or lower would be a mystery to us, and we wouldn’t be able to say anything about whether our independent variable had an impact on our dependent variable.

*Breaking it down*

Luckily, in the case of a one-way repeated measures model we are not so naive, and we are able to partition our variance into three potential sources:

a. **Between Group variation**: Variation due to the levels of our independent variable.

If you think about grouping data points from each of the levels of our independent variable, you’d have three groups, each with its own mean value. We can then compare these group means to the grand mean of the undifferentiated naive case.

*Question*: Why are the group means different from the grand mean?

*Answer*: The Between Group means differ from the grand mean because of the level of our independent variable they were collected from. This is the logic by which an ANOVA calculation tells us about the impact of our independent variable. If our groupings had no impact on our dependent variable, then knowing which group an observation comes from would not tell us anything beyond the naive case.

b. **Subjects variation**: Variation due to individual differences between participants.

Ignoring group membership, we can instead group data points by the subject they were collected from. In this view of our dataset, each participant has their own mean score, and each of these mean scores differs from the grand mean. Why? Because there are individual differences between people. This fact is not surprising or of particular relevance to questions about our experiment, but it is useful to pull the variance of individual differences out from the other variation in our study. The statistical power of repeated measures designs is due precisely to our ability to separate between-group variation from individual differences.

c. **Error**: Variation of unknown or unspecified origin.

After accounting for variation in our data due to **subjects** and **between-group** factors, what is left is called “error” or sometimes “residual” variance. This is the stuff in our data that our model is not able to capture. We can know something about how our groupings affect outcomes and how individual variation affects results, but what remains unexplained is called error.
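This three-way partition can be sketched numerically. Below is a minimal illustration with hypothetical numbers, not MagicStat's implementation: for a wide-format array of subjects × levels, the total variation splits exactly into between-group, subjects, and error components.

```python
import numpy as np

# Hypothetical wide-format data: rows are subjects, columns are levels.
data = np.array([
    [6.4, 5.8, 5.7],
    [6.8, 6.2, 6.1],
    [6.6, 5.9, 5.8],
    [5.9, 5.5, 5.4],
])
n_subjects, n_levels = data.shape
grand_mean = data.mean()  # the "naive" single-sample mean

# Sum of squares attributed to each source of variance.
ss_total = ((data - grand_mean) ** 2).sum()
ss_between = n_subjects * ((data.mean(axis=0) - grand_mean) ** 2).sum()
ss_subjects = n_levels * ((data.mean(axis=1) - grand_mean) ** 2).sum()
ss_error = ss_total - ss_between - ss_subjects  # what the model cannot explain
```

The three components always add back up to the total, which is what lets the ANOVA attribute variation to each source.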

**The F-statistic**

Mean Square

In the previous section we talked about sources of variation in our data. Those concepts roughly correspond to the values in the `Sum of Square` column. Unhelpfully, the `Between`, `Subjects`, and `Error` values are all based on different numbers of observations. To correct for this, we divide each `Sum of Square` value by its `Degree of Freedom` to compute a `Mean Square`. I’ll demonstrate below for the `Between` row of the sources-of-variance table.

```
# Between row calculation
ss = 4.32 # sum of square
df = 2 # degree of freedom
mean_square = ss / df
# Mean Square is 2.16
```

This same process of dividing `Sum of Square` by `Degrees of Freedom` can be performed for the `Error` row and yields a `Mean Square` of `0.01`. Given all these pieces, we are finally in a position to understand our primary statistic, `F`.

F as a ratio

The `F`-statistic of a One-Way repeated measures ANOVA table is the ratio of `Mean Square Between` over `Mean Square Error`. More meaningfully, we can conceptualize `F` as the ratio of explainable group-based variance over unexplainable error variance. Imagine the error term frozen at a value of `10`; as our explainable variation goes up, so does our `F`-statistic.

```
ms_error = 10
F = 1 / ms_error # F = 0.1
F = 10 / ms_error # F = 1
F = 100 / ms_error # F = 10
```
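Plugging in the mean squares from our own output (`2.16` for the `Between` row, `0.01` for the `Error` row) gives, up to rounding:

```python
# Mean squares taken from the calculations above.
ms_between = 2.16  # Between row
ms_error = 0.01    # Error row
F = ms_between / ms_error
# F is 216; the table's printed F may differ slightly because
# the mean squares shown here are rounded values.
```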

This understanding is what the `F`-statistic is there to tell us about. Do our groups produce different means than the naive “grand mean” we began with? If so, does the observed difference go beyond what we would expect from pure random chance? To answer that last part we look at the `p` or `Significance` value in the rightmost column. As with *t*-tests, the convention is to consider any *p*-value `< 0.05` to be statistically significant. Again, we cannot say how big a difference is, or even which groups are different, based on *p* alone, but we can say that the levels of our independent variable are not all the same and therefore something is affecting the dependent variable.

*Post-Hoc tests*

A significant ANOVA result can only tell us that not all levels of our independent variable produce the same mean. To get a more refined picture of our results, we look to the next two tables of our output.

Describing Our Data

The table of descriptive statistics is useful for getting a general idea of how your experiment turned out. Reported results include the `Mean`, standard deviation (`SD`), standard error (`SEM`), and number of observations (`N`).
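As a reminder of how these values relate, here is a generic sketch; the SD and `N` used are hypothetical, not our dataset’s:

```python
import math

def sem(sd: float, n: int) -> float:
    """Standard error of the mean from standard deviation and sample size."""
    return sd / math.sqrt(n)

# e.g. a hypothetical group with SD = 1.2 measured on 16 participants:
# sem(1.2, 16) -> 0.3
```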

Do the direction and spread of group means make sense for your experiment? Imagine an 8-week stress reduction program that shows ever-increasing stress the longer participants engage with it. A significant ANOVA result only tells you there are differences, not that they are in the directions you supposed, or even in a pattern that makes sense. What if stress increases for the first 4 weeks of a program but ends up lower at the end of 8 weeks than at the beginning?

Did people increasingly drop out of your study as time went on? The exact questions will be informed by your design, but thinking about our descriptives helps us generate hypotheses to explain what happened in our experiment. Statistics are tools to help us answer specific questions about our data, but explaining why those differences are or aren’t there is your job.

For this dataset we see cholesterol began at a mean value of `6.41`, decreased to `5.84` after 4 weeks of margarine use, and decreased slightly more to `5.78` after an additional 4 weeks of margarine use. Standard deviations also decreased as participation in the study proceeded (`1.19` to `1.12` to `1.10`).

**Making Inferences About Our Data**

Finally, we come to our post-hoc inferential statistics. Although our mean group values showed a drop in cholesterol as time in the study increased, we do not know which of those differences is statistically significant. It is possible that the largest observed difference of `0.57`, between the `Before` and `After4Weeks` groups, is statistically significant while the smaller `After4Weeks` to `After8Weeks` difference of `0.06` is not.

Above we see the table of inferential statistics from our analysis. Each row represents a comparison of one group to another. Because we have 3 levels of our independent variable, there are three possible comparisons to make. Each is present in the table of `Post-Hoc Tests`.
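MagicStat’s exact post-hoc procedure isn’t stated here; one common approach is pairwise paired-samples *t*-tests with a Bonferroni correction, sketched below on entirely hypothetical data:

```python
from itertools import combinations
from scipy.stats import ttest_rel

# Hypothetical wide-format columns: one list per level,
# with the same subjects in the same order in each list.
levels = {
    "Before":      [6.4, 6.8, 6.6, 5.9, 7.1],
    "After4Weeks": [5.8, 6.2, 5.9, 5.5, 6.4],
    "After8Weeks": [5.7, 6.0, 5.8, 5.4, 6.2],
}

results = {}
for (name_a, a), (name_b, b) in combinations(levels.items(), 2):
    t, p = ttest_rel(a, b)                       # paired t-test on matched subjects
    results[(name_a, name_b)] = min(1.0, p * 3)  # Bonferroni: 3 comparisons
```

With three levels there are exactly three pairwise comparisons, matching the three rows of the post-hoc table.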

The key values in this table are `Mean Difference`, `p value`, and `Reject`. From these we can see that all of our groups produce statistically significant differences from one another. Each of our comparisons reports `True` for whether we should reject the null hypothesis, and all *p*-values are `<= 0.05`. Combining this with our descriptive statistics we can say the following:

- Mean cholesterol levels significantly decreased after 4 weeks of margarine use
- Mean cholesterol levels significantly decreased between 4 weeks and 8 weeks of margarine use
- Participation in our program led to statistically significant decreases in mean cholesterol levels

*Next page:* Analyzing Cholesterol Dataset – Part 3
