Lab 5: Python Tricks and More Two-Group Comparisons
In previous labs, the data you worked with was given to you as lists inside the assignment notebook. This works well for simple data but becomes messy with larger or more complex datasets. It is also not how the data you will work with in real life usually comes packaged.
In this lab, you will learn the basics of using pandas, a common and powerful Python data analysis library that works well with Seaborn and Numpy. The usual abbreviation for importing pandas is "pd".
One of the most useful things we can do with pandas is read files into a Jupyter notebook (or other Python code). This produces a table-like object called a pandas data frame.
Our data is stored in a common format called CSV (comma-separated values), so we will use the pandas read_csv function to read it. The basic syntax is pd.read_csv("filename").
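As a minimal sketch (with hypothetical blood-loss numbers, since the real file is not shown here): read_csv also accepts any file-like object, so we can demonstrate it with a small in-memory CSV. Note how pandas pads the shorter column with NaN.

```python
import io
import pandas as pd

# Hypothetical stand-in for blood_loss.csv; the second column is one
# entry short, so pandas fills the gap with NaN when building the frame.
csv_text = "Treatment 1,Treatment 2\n150,220\n180,240\n165,\n"
df = pd.read_csv(io.StringIO(csv_text))
print(df)
```

In the lab itself you would simply call pd.read_csv("blood_loss.csv").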
In this lab, you will look at a dataset of blood loss (in mL) in patients having one of two types of surgery.
Read the blood_loss.csv file into your notebook and assign it to a variable. View the resulting object. HINT: Don't use print. It makes the data frame look worse.
Notice that, at the end of the second column, there is an "NaN". This stands for "not a number" and is there because the number of observations in the two columns is not the same. We will have to deal with this later, but for now, it's not an issue.
Making Pretty Plots
As usual, we want to start by visualizing the data. Pandas works well with Seaborn plotting tools. For example, to make side-by-side dotplots of a dataframe called df, you just need to enter sns.stripplot(data=df).
Make a beeswarm plot of the blood loss data. Make sure the y-axis is appropriately labeled.
#2
p = sns.swarmplot(data=blood_loss)
p.set(xlabel="Treatment Group", ylabel="Blood Loss (mL)");
Notice that Seaborn automatically labels the categories using the column labels from your original data frame. You can change these if you want, but we'll stay with the originals.
A good figure should have a title. To set one, use the syntax p.set_title("Title") (where p is the name of your plot). You can also use the fontsize option to set the title font size.
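Since set_title is really a Matplotlib Axes method, a quick sketch with bare Matplotlib shows the pattern (the title text here is just a placeholder):

```python
import matplotlib
matplotlib.use("Agg")          # draw off-screen; not needed in a notebook
import matplotlib.pyplot as plt

fig, p = plt.subplots()        # p plays the role of your Seaborn plot
p.set_title("My Descriptive Title", fontsize=16)
```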
Add a descriptive title to your plot. Make it reasonably large.
#3
p = sns.swarmplot(data=blood_loss)
p.set(xlabel="Treatment Group", ylabel="Blood Loss (mL)")
p.set_title("Blood Loss When Using Two Surgical Techniques");
So far, we have used the default Seaborn colors. However, there are plenty of other options. We can set colors manually or, better, choose one of the palettes that Seaborn makes available. Since this dataset is about blood, we might want to use shades of red. To do this, just put palette="Reds" into your plot command.
#4
p = sns.swarmplot(data=blood_loss, palette="Reds")
p.set(xlabel="Treatment Group", ylabel="Blood Loss (mL)")
p.set_title("Blood Loss When Using Two Surgical Techniques");
Make a different type of visualization of this data. Label the axes, make a title, and use a color scheme you like.
#5
p = sns.violinplot(data=blood_loss, palette="Reds")
p.set(xlabel="Treatment Group", ylabel="Blood Loss (mL)")
p.set_title("Blood Loss When Using Two Surgical Techniques");
The two types of surgery seem to have different levels of blood loss. We want to find out what the difference is and get a measure of the associated uncertainty. We will do this by bootstrapping, but first a few technicalities require our attention.
The data in question is in a pandas data frame. For many purposes, this is a good thing, but in this case, the NaN at the end of the second column would cause problems. Workarounds are possible but would be somewhat clumsy. We will therefore just make the two columns into lists and proceed as in earlier labs.
To access a column in a pandas data frame, just use the column's title. To access the column "Col 1" in the data frame df, use df["Col 1"]. We can then use the list function to convert the column into a list.
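A quick sketch with a made-up frame (the column names match the text; the values are placeholders):

```python
import pandas as pd

# Hypothetical data frame; the values are placeholders
df = pd.DataFrame({"Col 1": [1.5, 2.5, 3.5], "Col 2": [4.0, 5.0, 6.0]})
col = df["Col 1"]        # select the column by its title
col_list = list(col)     # convert the column to a plain Python list
print(col_list)          # [1.5, 2.5, 3.5]
```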
Make each column into a list, assigning each to a variable. View the lists.
We're almost done, but the list for Treatment 2 still has that pesky NaN. Since it's at the end, the easiest way to get rid of it is to get all the other list elements and assign the new list to the same variable as the old one.
Make a list without the NaN. HINT: You may want to review indexing.
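As a sketch with placeholder numbers: slicing with [:-1] keeps every element except the last, and reassigning to the same name replaces the old list, just as the text suggests.

```python
import math

# Hypothetical list ending in NaN, like Treatment 2 after conversion
treatment = [220.0, 240.0, float("nan")]
treatment = treatment[:-1]   # drop the trailing NaN
print(treatment)             # [220.0, 240.0]
```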
We are now ready to compute confidence intervals and p-values.
Referring back to the visualizations you made earlier, pick a descriptor for the data and compute it for both treatments. Briefly justify your choice.
#8
Median_1 = np.median(Treatment1)
Median_2 = np.median(Treatment2)
display("Median of Treatment 1 List", Median_1,
        "Median of Treatment 2 List", Median_2)
Pick a measure you want to compare about the two groups. Most likely, that is a measure of central tendency or variation, but you could use something else. Find your observed difference.
#9
differenceMedian = Median_1 - Median_2
MAD_1 = np.median(np.abs(np.array(Treatment1) - Median_1))
MAD_2 = np.median(np.abs(np.array(Treatment2) - Median_2))
display("MAD for Treatment 1 List", MAD_1,
        "MAD for Treatment 2 List", MAD_2,
        "Observed Difference", differenceMedian)
Using 10,000 bootstrap replicates, find the 99% pivotal confidence interval for the difference.
#10
total = 10000
treatment_difference = np.zeros(total)
for i in range(total):
    Random_Treatment1 = np.random.choice(Treatment1, len(Treatment1))
    Random_Treatment2 = np.random.choice(Treatment2, len(Treatment2))
    treatment_difference[i] = np.median(Random_Treatment1) - np.median(Random_Treatment2)
treatment_difference.sort()
M_lower = treatment_difference[int(0.005 * total)]       # 0.5th percentile
M_upper = treatment_difference[int(0.995 * total) - 1]   # 99.5th percentile
M_observed = np.median(treatment_difference)
M_upper_pivotal = 2 * M_observed - M_lower
M_lower_pivotal = 2 * M_observed - M_upper
display("M Pivotal (Lower): Red Line", M_lower_pivotal,
        "M Pivotal (Upper): Red Line", M_upper_pivotal)
p = sns.distplot(treatment_difference, kde=False,
                 axlabel="Difference in Treatment 1 and Treatment 2 Medians")
p.set(ylabel="Count")
p.axvline(M_lower_pivotal, color="red")
p.axvline(M_upper_pivotal, color="red")
p.axvline(M_observed, color="blue");
Write a sentence interpreting your effect size and confidence interval in the context of the study.
Since the 99% confidence interval for the difference in median blood loss does not contain the null-hypothesis value of 0, the result is statistically significant: the two surgical techniques appear to differ in median blood loss, with the interval giving a plausible range (in mL) for the size of that difference.
We can also find a p-value for the observed difference. Since Treatment 1 seems to have considerably more variability than Treatment 2, even after excluding outliers, the two-box method makes sense.
Using the two-box method, find the two-sided p-value for the observed difference. HINT: There are two ways to recenter your data: you can use a for loop or convert the list to a Numpy array and just subtract.
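Both recentering routes from the hint give the same result; here is a sketch with placeholder numbers:

```python
import numpy as np

data = [3.0, 5.0, 9.0]               # hypothetical values
med = np.median(data)                # 5.0

# Way 1: a for loop that builds a new list
centered_loop = []
for x in data:
    centered_loop.append(x - med)

# Way 2: convert to a NumPy array and subtract in one step
centered_arr = np.array(data) - med

print(centered_loop)                 # [-2.0, 0.0, 4.0]
```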
#12
Treatment1_centered = np.array(Treatment1) - Median_1
Treatment2_centered = np.array(Treatment2) - Median_2
total = 10000
treatment_difference = np.zeros(total)
differenceMedian = Median_1 - Median_2
other_limit = -differenceMedian
for i in range(total):
    Random_Treatment1 = np.random.choice(Treatment1_centered, len(Treatment1_centered))
    Random_Treatment2 = np.random.choice(Treatment2_centered, len(Treatment2_centered))
    treatment_difference[i] = np.median(Random_Treatment1) - np.median(Random_Treatment2)
p = sns.distplot(treatment_difference, kde=False,
                 axlabel="Difference in Treatment 1 and Treatment 2 Medians")
p.set(ylabel="Count")
p.axvline(differenceMedian, color="red")
p.axvline(other_limit, color="red")
p.axvline(np.median(treatment_difference), color="blue")
p.set_title("Difference in Median Blood Loss When Using Two Surgical Techniques");
pvalue = (sum(treatment_difference >= differenceMedian)
          + sum(treatment_difference <= other_limit)) / total
display("P Value", pvalue)
Write a sentence interpreting the p-value.
#13 Since the simulated p-value is 0, which is below our alpha of 0.01, the result is statistically significant and we reject the null hypothesis of no difference between the treatments. Note that a simulated p-value of 0 really means p < 1/10,000 (none of the 10,000 replicates produced a difference as extreme as the one we observed), not that there is literally a 0% chance of such a result arising by chance under the null.