
Lab 6: Comparing Three or More Groups with ANOVA

Often we need to compare data from three or more groups to see whether one or more of them differs from the others. To do this, scientists use a statistic called the F-statistic, defined as $F=\frac{Var_{between}}{Var_{within}}$. The variability between groups is the sum of the squared differences between the group means and the grand mean. The variability within groups is the sum of the group variances. In LS 40, we will use a variation of the F-statistic that does not require squaring, called the F-like statistic.
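Written out for $k$ groups, the verbal description above corresponds roughly to the following (the classical F-statistic also divides each sum by its degrees of freedom, which is omitted here for simplicity):

$$F=\frac{\sum_{j=1}^{k}\left(\bar{x}_{j}-\bar{X}\right)^{2}}{\sum_{j=1}^{k}s_{j}^{2}}$$

where $\bar{x}_{j}$ and $s_{j}^{2}$ are the mean and variance of group $j$, and $\bar{X}$ is the grand mean of all the data.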

  1. Import pandas, NumPy, and seaborn.
#TODO
import pandas as pd
import numpy as np
import seaborn as sns

In this lab, we will examine the results of a pharmaceutical company's study comparing the effectiveness of different pain relief medications on migraine headaches. For the experiment, 27 volunteers were selected, and 9 were randomly assigned to each of three drug formulations. The subjects were instructed to take the drug during their next migraine episode and to report their pain 30 minutes after taking the drug, on a scale of 1 = no pain to 10 = extreme pain.

  2. Using the pandas read_csv function, import the file migraines.csv and show the data.
#TODO
migraines=pd.read_csv("migraines.csv")
migraines
   Drug A  Drug B  Drug C
0       4       7       6
1       5       8       7
2       4       4       6
3       2       5       6
4       2       4       7
5       4       6       5
6       3       5       6
7       3       8       5
8       3       7       5
  3. Visualize the data. Use as many different plots as you need to get a sense of the distributions. What are your initial impressions?
#TODO
p=sns.stripplot(data=migraines, palette="PuBu")
p.set(ylabel="Pain rating (1 = no pain, 10 = extreme pain)")
p.set_title("Migraine Treatment Study", fontsize=20)

p2=sns.violinplot(data=migraines, palette="PuBu", orient="h")
p2.set(xlabel="Pain rating (1 = no pain, 10 = extreme pain)")
p2.set_title("Migraine Treatment Study", fontsize=20)

Computing the F-like statistic

In the next several exercises, you will compute the F-like statistic for your data.

$$F\text{-like}=\frac{n_{a}|\widetilde{a}-\widetilde{G}|+n_{b}|\widetilde{b}-\widetilde{G}|+n_{c}|\widetilde{c}-\widetilde{G}|}{\sum|a_{i}-\widetilde{a}|+\sum|b_{i}-\widetilde{b}|+\sum|c_{i}-\widetilde{c}|}$$

Here $\widetilde{a}$, $\widetilde{b}$, and $\widetilde{c}$ are the group medians, $\widetilde{G}$ is the grand median of all the data pooled together, and $n_{a}$, $n_{b}$, and $n_{c}$ are the group sizes.

Each quantity you compute in this section should be assigned to a variable.

  4. Find the median of each column. HINT: pandas data frames have a built-in function for columnwise medians, so you can just use df.median(), where df is the name of your data frame.
#TODO
druga=migraines["Drug A"]
drugb=migraines["Drug B"]
drugc=migraines["Drug C"]
medians=migraines.median()
medians
Drug A    3.0
Drug B    6.0
Drug C    6.0
dtype: float64
medians["Drug C"]
6.0
  5. Find the grand median (the median of the whole sample). HINT: Use np.median.
#TODO
# Grand median = median of all 27 observations pooled together
gmedian=np.median(migraines)
gmedian
5.0
  6. Find the numerator of the F-like statistic (variation among groups). HINT: When working with data frames and NumPy arrays, you can do computations like addition and multiplication directly, without for loops (unlike with plain Python lists). Also, NumPy has abs and sum functions.
d=medians-gmedian
d
Drug A   -2.0
Drug B    1.0
Drug C    1.0
dtype: float64
#TODO
# Group sizes
na=len(druga)
nb=len(drugb)
nc=len(drugc)
# Sum of group size times |group median - grand median| for each group
numerator=na*abs(medians["Drug A"]-gmedian)+nb*abs(medians["Drug B"]-gmedian)+nc*abs(medians["Drug C"]-gmedian)
# Equivalent vectorized form (valid here because every group has len(migraines) = 9 subjects):
# numerator = np.sum(len(migraines)*abs(medians-gmedian))
numerator
36.0
  7. Compute the denominator of the F-like statistic. This represents variation within groups. HINT: NumPy and pandas will handle columns automatically.
#TODO
# Sum of |observation - its own group median| over all observations in all groups
denominator=np.sum(np.sum(abs(migraines-migraines.median())))
denominator
24.0
  8. Compute F-like, which is the ratio $\frac{\text{variation among groups}}{\text{variation within groups}}$.
#TODO
# Observed F-like statistic for the real data
flikeog=numerator/denominator
flikeog
1.5

Bootstrapping

We now want to find a p-value for our data by simulating the null hypothesis. This, of course, means computing the F-like statistic for each simulated dataset, which takes a lot of code and would make a mess inside the bootstrap loop. Instead, we will package our code into a function and call that function wherever necessary.

  9. Write a function that will compute the F-like statistic for this dataset or one of the same size.
#TODO
def fstat(df):
    # Group medians (one per column) and the grand median of all values pooled together
    medians=df.median()
    gmedian=np.median(df)
    # Among-group variation: group size * |group median - grand median|, summed;
    # len(df) works as the group size because every column has the same number of rows
    numerator=np.sum(len(df)*abs(medians-gmedian))
    # Within-group variation: |observation - its own group median|, summed over all groups
    denominator=np.sum(np.sum(abs(df-df.median())))
    return numerator/denominator
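As a quick check, calling the function on the original data frame should reproduce the observed value computed step by step above:

fstat(migraines)  # should match flikeog (1.5)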

We now want to simulate the null hypothesis that there is no difference between the groups. To do this, we have to make all the data into one dataset, sample pseudo-groups from it, and compute the F-like statistic for the resampled data.

  10. Use the code alldata = np.concatenate([migraine["Drug A"], migraine["Drug B"], migraine["Drug C"]]) (this assumes your data frame is called "migraine") to put all the data into one 1-D array.
#TODO
alldata=np.concatenate([migraines["Drug A"], migraines["Drug B"], migraines["Drug C"]])
  11. Sample three groups of the appropriate size from alldata. Assign each to a variable.
#TODO
# Each resampled pseudo-group gets 9 values (len(migraines) = rows per group);
# np.random.choice samples with replacement by default
a=np.random.choice(alldata,len(migraines))
b=np.random.choice(alldata,len(migraines))
c=np.random.choice(alldata,len(migraines))
  12. Make the three samples into a data frame. To do this, use the NumPy function column_stack to put the 1-D arrays side by side and then use the pandas function DataFrame to convert the result into a data frame.
#TODO
data=pd.DataFrame(np.column_stack([a,b,c]))
  13. Compute the F-like statistic for your resampled data.
#TODO
fstat(df=data)
0.9090909090909091
  14. Do the above steps 10,000 times to simulate the null hypothesis, storing the results.
#TODO
flikelist=np.zeros(10000)
alldata=np.concatenate([migraines["Drug A"], migraines["Drug B"], migraines["Drug C"]])
for i in range(10000):
    a=np.random.choice(alldata,len(migraines))
    b=np.random.choice(alldata,len(migraines))
    c=np.random.choice(alldata,len(migraines))
    data=pd.DataFrame(np.column_stack([a,b,c]))
    flikelist[i]=fstat(data)
flikelist
array([0.62068966, 0.5 , 0.64285714, ..., 0.54545455, 0.39130435, 0.36 ])
plot=sns.distplot(flikelist, kde=False, color="pink")
plot.set(xlabel="Flike", ylabel="Count")
plot.axvline(flikeog,color="red")
  15. Find the p-value for your data. What do you conclude about the migraine treatments? Use $\alpha = 0.05$ for NHST.
# TODO
# Fraction of simulated F-like statistics at least as large as the observed one
d1=np.sum(flikelist>=flikeog)
pvalue=d1/10000
pvalue
0.0095

Post Hoc Analysis

Having obtained a significant result from the omnibus test, we can now try to track down the source of the significance. Which groups are actually different from which?

Perform pairwise two-group comparisons and record the p-value for each.

# TODO
# For each pair of drugs, simulate the null hypothesis by pooling the two
# groups and resampling both from the pool (as in the omnibus test above),
# then compare the resampled median differences to the observed difference.
pairwise_pvalues={}
for name1, name2 in [("Drug A","Drug B"), ("Drug A","Drug C"), ("Drug B","Drug C")]:
    obsdiff=abs(np.median(migraines[name1])-np.median(migraines[name2]))
    pooled=np.concatenate([migraines[name1], migraines[name2]])
    medarray=np.zeros(10000)
    for i in range(10000):
        e=np.random.choice(pooled,len(migraines))
        g=np.random.choice(pooled,len(migraines))
        medarray[i]=np.median(e)-np.median(g)
    # Two-tailed p-value for this pair
    pairwise_pvalues[(name1,name2)]=np.sum(abs(medarray)>=obsdiff)/10000
pairwise_pvalues

If more than one comparison comes out significant at the $\alpha = 0.05$ level, divide $\alpha$ by the number of significant tests. Which ones are still significant?

# TODO
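# A minimal sketch of applying the adjustment described above, assuming the
# dictionary pairwise_pvalues from the previous cell holds the three pairwise p-values
alpha=0.05
significant=[pair for pair, p in pairwise_pvalues.items() if p < alpha]
# Divide alpha by the number of significant tests (when more than one is significant)
alpha_adjusted=alpha/len(significant) if len(significant) > 1 else alpha
still_significant=[pair for pair, p in pairwise_pvalues.items() if p < alpha_adjusted]
print("Adjusted alpha:", alpha_adjusted)
print("Still significant:", still_significant)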

Write a sentence or two describing your findings.

#TODO