I present here an example final project. This project is not intended to be a carbon-copy template for your project, but rather it should give you an idea of the general shape and style of work that I am looking for. Note that I am following the general outline and instructions for this sample project.
## Library Imports
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.covariance import EmpiricalCovariance
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from time import process_time
import seaborn as sns
sns.set_style("white")
I am using data from the city of Los Angeles, originally found on this website.
The dataset has four columns of interest: Text Date, Zip Code, Water Use (in HCF), and Power Use (in kWh).
The raw data has another column (Value Date) that we will not be using. I first examine the raw data.
dfv1 = pd.read_csv("Water_and_Electric_Usage_from_2005_-_2013.csv")
dfv1.head(2)
Before moving forward, there are some obvious data cleaning steps that need to be taken. I change the date from text to a datetime object and extract the 5-digit zip code from the Zip Code column. Finally, I drop the unneeded columns from the dataframe.
dfv1['Date'] = pd.to_datetime(dfv1["Text Date"], format="%b_%Y")
dfv1['Zip'] = dfv1['Zip Code'].str.extract(r"(.*)\n", expand=False).astype(int)
dfv2 = dfv1.drop(['Text Date', 'Value Date', 'Zip Code'], axis=1)
dfv2.head(2)
The goal of this project is to predict the water and power use. I plot the totals for each date.
dfsum = dfv2.groupby('Date').sum()
dfsum.reset_index(inplace=True)
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(dfsum['Date'], dfsum['Water Use'], marker='o', linestyle='', ms=8)
ax1.set_ylabel('Water Use (HCF)')
ax2.plot(dfsum['Date'], dfsum['Power Use'], marker='o', linestyle='', ms=8)
plt.xlabel('date')
ax2.set_ylabel('Power Use (kWh)')
plt.tight_layout()
There is an extra point at the end that is separated from the rest of the dataset. I look specifically for times after Jan 1, 2013.
dfsum[dfsum['Date'] > pd.Timestamp('2013-01-01')]
There are missing data points between July 2013 and November 2013. I cut the data off at June 30, 2013.
dfv3 = dfv2[dfv2['Date'] < pd.Timestamp('2013-07-01')].copy()
dfsum = dfv3.groupby('Date').sum()
dfsum.reset_index(inplace=True)
It looks like there was a major change in water usage around the midpoint of 2012. However, when I look at the data grouped by zip code, I see that the drop is due to a change in the number of reported zip codes that happened in the middle of 2012.
# Group data by zip code
groups = dfv3.groupby('Zip')
# Plot
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.margins(0.05) # Optional, just adds 5% padding to the autoscaling
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
    ax1.plot(group['Date'], group['Water Use'], marker='o', linestyle='', ms=8, label=name)
ax1.set_ylabel('Water Use')
ax2.margins(0.05) # Optional, just adds 5% padding to the autoscaling
for name, group in groups:
    ax2.plot(group['Date'], group['Power Use'], marker='o', linestyle='', ms=8, label=name)
plt.xlabel('date')
ax2.set_ylabel('Power Use')
plt.tight_layout()
Because my fits will be by zip code, this should not be an issue. However, there are a handful of possible outliers in the water use data.
I utilize the Mahalanobis distance to see how far from the average of the data the potential outliers are.
# Measure the mahalanobis distance.
X=dfv3[['Water Use']].values
emp_cov = EmpiricalCovariance().fit(X)
mahal_dist = np.sqrt(emp_cov.mahalanobis(X))
# Visualize the results
plt.plot(mahal_dist,marker='o',linestyle='')
plt.ylabel('Mahalanobis Distance')
There are about 8 points that stand out as being very far from the rest of the 12,000 points (over 15 standard deviations). All of the points occur in two short time periods in the same zip code.
dfv3["M_dist"] = mahal_dist
dfv3[dfv3["M_dist"]>15].sort_values('Date')
There may have been something unusual happening during those time periods. I drop these points from the dataset as error outliers.
dfv4 = dfv3[dfv3["M_dist"]<15].drop('M_dist',axis=1).reset_index(drop=True)
dfv4.head(2)
There is a definite month/year periodicity in the data. I add month and year features to the model. I visualize the data and see that the periodicity is clearly visible in both the water use and the power use data.
dfv4['Month'] = dfv4['Date'].dt.month
dfv4['Year'] = dfv4['Date'].dt.year
dfsum = dfv4.groupby('Month').sum()
dfsum.reset_index(inplace=True)
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(dfsum['Month'], dfsum['Water Use'], marker='o', linestyle='', ms=8)
ax1.set_ylabel('Water Use (HCF)')
ax2.plot(dfsum['Month'], dfsum['Power Use'], marker='o', linestyle='', ms=8)
plt.xlabel('month')
ax2.set_ylabel('Power Use (kWh)')
plt.tight_layout()
Both the water and the power usage go up in the summer months, which makes sense. There is also a power spike in January that may be related to heating on cold days.
I hypothesize that the water and power usage depend on the following additional factors: 1) the population of each zip code, 2) the economic status of each zip code (wealth distribution), and 3) the weather.
I join the original dataset with three additional datasets to add these features.
The population data was retrieved from the US Census Bureau.
https://www.census.gov/geo/maps-data/data/zcta_rel_download.html
The dataset includes information about the population, the land area, and the total area for each zip code. There are a number of other columns of data that I will not use. I convert the zip code column (ZCTA5) to an integer in order to match the datatype of my power/water usage dataframe zip code.
dfzip = pd.read_csv("zcta_county_rel_10.txt",dtype={'ZCTA5':'object'})
dfzip['ZCTA5'] = dfzip['ZCTA5'].astype(int)
dfzip.head(2)
Because this dataset is intended to be merged with other datasets, there are multiple entries for each zip code. The "Z" columns are totals for each zip code, so I only need one of them; I use the max aggregation. I cut the unneeded columns, keeping only the zip code, population, total area, and land area columns.
dfzipgroups = (dfzip[['ZCTA5','ZPOP','ZAREA','ZAREALAND']].groupby("ZCTA5").max())
dfzipgroups.reset_index(inplace=True)
dfzipgroups.head(2)
To get a sense of the dataset, I look at the relationship between the land area and the population.
dfzipgroups.plot.scatter(x='ZAREALAND',y='ZPOP')
There is an interesting anti-correlation here. But perhaps that isn't too surprising: densely populated areas are split into many zip codes, so most zip codes end up with a small land area and a large population.
Now I merge this with the resource use dataset, keeping only the columns that I need after the join.
dfv5 = pd.merge(dfv4,dfzipgroups,left_on="Zip",right_on="ZCTA5")
print("{} rows lost in data merge.".format(len(dfv4.index)-len(dfv5.index)))
dfv5.drop(['ZCTA5'],axis=1,inplace=True)
dfv5.head(2)
There were a number of rows lost in the merge; these were zip codes that were not in the US Census database.
I graph the new inputs to see how the Water and Power use depend on them.
f, (ax1, ax2) = plt.subplots(2, 3, sharey=True)
ax1[0].plot(dfv5['ZPOP'], dfv5['Water Use'], marker='o', linestyle='', ms=8)
ax1[0].set_ylabel('Water Use (HCF)')
ax2[0].plot(dfv5['ZPOP'], dfv5['Power Use'], marker='o', linestyle='', ms=8)
ax2[0].set_ylabel('Power Use (kWh)')
ax2[0].set_xlabel('Population')
ax1[1].plot(dfv5['ZAREA'], dfv5['Water Use'], marker='o', linestyle='', ms=8)
ax2[1].plot(dfv5['ZAREA'], dfv5['Power Use'], marker='o', linestyle='', ms=8)
ax2[1].set_xlabel('Area')
ax1[2].plot(dfv5['ZAREALAND'], dfv5['Water Use'], marker='o', linestyle='', ms=8)
ax2[2].plot(dfv5['ZAREALAND'], dfv5['Power Use'], marker='o', linestyle='', ms=8)
ax2[2].set_xlabel('Land Area')
plt.tight_layout()
It looks like the power use is more strongly correlated with the census population and land area data.
I add the economic data which is based on IRS tax records:
https://www.irs.gov/uac/soi-tax-stats-individual-income-tax-statistics-zip-code-data-soi
This dataset has the following variables I am interested in: the number of returns (Nreturns), adjusted gross income (AGI), salaries and wages (SW), and the earned income credit (EIC), reported by zip code and year.
However, the dataset is originally an Excel spreadsheet with multiple sheet names. I created a short function to retrieve the tax data I want.
def getTaxYear(year):
    # Read in a single sheet from the Excel document and return it as a dataframe.
    dftemp = pd.read_excel("allCAtaxdata2005-2013.xlsx", sheet_name=str(year), thousands=",", na_values=["*", "* ", ".", "-"])
    dftemp['Zip Code'] = dftemp['Zip Code'].astype('int')
    dftemp['Year'] = year
    return dftemp
dftaxes = pd.concat([getTaxYear(x) for x in range(2005,2014)])
# Drop NaNs from data
dftaxes.dropna(inplace=True)
dftaxes.head(2)
I visualize the economic data in the following plot.
dfgroup = dftaxes.groupby('Year').sum()
dfgroup.reset_index(inplace=True)
plt.plot(dfgroup['Year'],dfgroup['AGI'],label='AGI')
plt.plot(dfgroup['Year'],dfgroup['SW'],label='SW')
plt.plot(dfgroup['Year'],dfgroup['EIC'],label='EIC')
plt.legend(bbox_to_anchor=(1.2,0.5))
plt.gca().set_yscale('log')
plt.gca().get_xaxis().get_major_formatter().set_useOffset(False)
plt.gcf().autofmt_xdate()
plt.ylabel('IRS Data (1000$)')
I merge the IRS dataset with the population and usage dataset.
dfv6 = pd.merge(dfv5,dftaxes,how="inner",left_on=["Zip","Year"],right_on=["Zip Code","Year"])
print("{} rows lost in data merge.".format(len(dfv5.index)-len(dfv6.index)))
dfv6.drop('Zip Code',axis=1,inplace=True)
dfv6.head(2)
There is an additional data loss in this merge. However, I still have most of my original dataset.
print("Remaining data (from original dataset): {0:.1f}%".format(float(len(dfv6.index))/(len(dfv1.index))*100))
I visualize the combined dataset.
# Plot usage vs Adjusted Gross Income
f, (ax1, ax2) = plt.subplots(2, 2, sharey=True)
ax1[0].scatter(dfv6['AGI'],dfv6['Water Use'])
ax1[0].set_ylabel("Water Use")
ax2[0].scatter(dfv6['AGI'],dfv6['Power Use'])
ax2[0].set_ylabel("Power Use")
ax2[0].set_xlabel("Adjusted Gross Income")
ax2[0].set_xlim(0,max(dfv6['AGI']))
ax1[1].scatter(dfv6['EIC'],dfv6['Water Use'])
ax2[1].scatter(dfv6['EIC'],dfv6['Power Use'])
ax2[1].set_xlabel("Earned Income Credit")
ax2[1].set_xlim(0,max(dfv6['EIC']))
Again, there aren't strong correlations here, but the correlations that exist may improve the model. I am also interested in how closely this IRS dataset tracks the US Census dataset. The number of tax returns in a zip code should be close to the number of people living in that area. I plot the number of returns versus the census population to check this correlation.
#We'll use a different tool to plot the data now that we know how to group the data by a category. This will help us make better combined plots later on.
groups = dfv6.groupby('Year')
# Plot
trainfig, ax = plt.subplots()
#ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
colors = iter(plt.cm.rainbow(np.linspace(0, 1, len(groups))))
for name, group in groups:
    ax.plot(group['ZPOP'], group['Nreturns'], 'o', label=name, color=next(colors))
ax.legend(bbox_to_anchor=(1,0.5))
ax.set_ylabel('N Returns')
ax.set_xlabel('2010 Census Population')
The correlation is as expected. However, there is also a trend based on the year, which probably corresponds to growth and decline in populations.
I add historical weather data to my input features. This dataset comes from:
The dataset has the following variables: average wind speed (AWND), cooling degree days (CLDD), heating degree days (HTDD), precipitation (PRCP), average temperature (TAVG), and total sunshine (TSUN).
The weather data is essentially the same for all of the zip codes in this dataset, since they are all in the city of Los Angeles. Any small variation in the rain or temperature will be negligible because I only have averages over a full month of data.
weatherdf = pd.read_csv("LAX_weather.csv",parse_dates=[2])
weatherdf.fillna(0.0, inplace=True)
weatherdf['Date']=pd.to_datetime(weatherdf['DATE'])
weatherdf.drop(['STATION', 'NAME','DATE'],axis=1, inplace=True)
weatherdf.head(2)
# Plot weather data
f, (ax1, ax2) = plt.subplots(2, 3, sharex=True)
ax1[0].plot(weatherdf['Date'],weatherdf['AWND'])
ax1[0].set_ylabel("AWND")
ax2[0].plot(weatherdf['Date'],weatherdf['CLDD'])
ax2[0].set_ylabel("CLDD")
ax1[1].plot(weatherdf['Date'],weatherdf['HTDD'])
ax1[1].set_ylabel("HTDD")
ax2[1].plot(weatherdf['Date'],weatherdf['PRCP'])
ax2[1].set_ylabel("PRCP")
ax1[2].plot(weatherdf['Date'],weatherdf['TAVG'])
ax1[2].set_ylabel("TAVG")
ax2[2].plot(weatherdf['Date'],weatherdf['TSUN'])
ax2[2].set_ylabel("TSUN")
plt.tight_layout()
plt.gcf().autofmt_xdate()
I merge this last dataset with my other features.
dfv7 = pd.merge(dfv6, weatherdf, on='Date')
print("{} rows lost in data merge.".format(len(dfv6.index)-len(dfv7.index)))
dfv7.drop(['TSUN'],axis=1,inplace=True)
dfv7.head(2)
There were weather data for all of the existing rows, so there wasn't any data lost in this step. I visualize the weather data and the water/power use data together to look for correlations.
# Plot usage vs weather data
f, (ax1, ax2) = plt.subplots(2, 2, sharey=True)
ax1[0].scatter(dfv7['TAVG'],dfv7['Water Use'])
ax1[0].set_ylabel("Water Use")
ax2[0].scatter(dfv7['TAVG'],dfv7['Power Use'])
ax2[0].set_ylabel("Power Use")
ax2[0].set_xlabel("Average Temp")
ax1[1].scatter(dfv7['PRCP'],dfv7['Water Use'])
ax1[1].set_xlim(0,max(dfv7['PRCP']))
ax2[1].scatter(dfv7['PRCP'],dfv7['Power Use'])
ax2[1].set_xlabel("Precipitation")
ax2[1].set_xlim(0,max(dfv7['PRCP']))
It certainly looks like there are correlations here, at least between Water Use/Precipitation and Power Use/Average Temp. Finally, I one-hot encode the zip code, month, and year columns as dummy variables so they can enter the models as categorical features, then drop the original columns.
zipdummy = pd.get_dummies(dfv7['Zip'],prefix='Z')
monthdummy = pd.get_dummies(dfv7['Month'],prefix='M')
yeardummy = pd.get_dummies(dfv7['Year'],prefix='Y')
dfv8 = dfv7.join(zipdummy).join(monthdummy).join(yeardummy)
dfv8.drop(['Date','Zip','Month','Year'],inplace=True,axis=1)
dfv8.head(2)
I split my data into 80% training and 20% testing data sets. I do this before doing any further data transformations in order to avoid "data snooping".
train, test = train_test_split(dfv8, test_size=0.2, random_state=23)
Almost all of my data columns are on different scales. I standardize the non-categorical data before fitting. Because only some of the columns need to be scaled, I split the data, scale part of it, then re-join the training dataset. I apply the same transformation to the test dataset.
# Fit the scaler
std_scaler = StandardScaler().fit(train.iloc[:, 2:14])
# Apply the transformation to the train and test datasets
train_std = pd.DataFrame(std_scaler.transform(train.iloc[:, 2:14]), columns=train.columns[2:14])
test_std = pd.DataFrame(std_scaler.transform(test.iloc[:, 2:14]), columns=test.columns[2:14])
# Recombine the scaled datasets
trainToMerge = train.reset_index(drop=True)
train_scaled = trainToMerge.iloc[:, 0:2].merge(train_std.merge(trainToMerge.iloc[:, 15:], left_index=True, right_index=True), left_index=True, right_index=True)
testToMerge = test.reset_index(drop=True)
test_scaled = testToMerge.iloc[:, 0:2].merge(test_std.merge(testToMerge.iloc[:, 15:], left_index=True, right_index=True), left_index=True, right_index=True)
I now separate out the features and the targets.
train_features = train_scaled.iloc[:, 2:].values
test_features = test_scaled.iloc[:, 2:].values
water_target = train_scaled['Water Use'].values
water_actual = test_scaled['Water Use'].values
power_target = train_scaled['Power Use'].values
power_actual = test_scaled['Power Use'].values
I tested reducing the dimensionality of the features using PCA. However, this did not improve either the model performance or the time it takes to fit the models, so I will not be using PCA for this analysis.
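For reference, a PCA check of this kind might look like the following sketch. The random stand-in matrix replaces the scaled feature matrix built above, and the 95% variance threshold is illustrative rather than the exact setting I used:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in data for the sketch; in the notebook this would be train_features.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(100, 12))

# Ask PCA to keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(train_features)

print("Original dimensions:", train_features.shape[1])
print("Reduced dimensions:", pca.n_components_)
print("Explained variance kept:", pca.explained_variance_ratio_.sum())
```

If the reduced dimensionality is close to the original (as happens here for roughly uncorrelated features), PCA buys little, which is consistent with the result described above.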
My first model is to simply use the "prior year" data in order to predict the usage. I am using the Root-Mean-Square (RMS) as my metric for model performance. The RMS value gives me an estimate of the prediction error in the same units as the Water/Power use data.
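For reference, the RMS metric used throughout can be written as a small helper function (a sketch; in the cells that follow I compute it inline):

```python
import numpy as np

def rms_error(predictions, actuals):
    """Root-mean-square error, in the same units as the target."""
    predictions = np.asarray(predictions, dtype=float)
    actuals = np.asarray(actuals, dtype=float)
    return np.sqrt(np.mean((predictions - actuals) ** 2))

# A model that is off by exactly 3 units everywhere has an RMS error of 3.
print(rms_error([10, 20, 30], [13, 17, 33]))  # → 3.0
```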
dfoffset = dfv4.copy()
dfoffset['Prior Year'] = dfoffset['Year'] - 1
df_new = dfoffset.merge(dfv4, left_on=['Prior Year', 'Month', 'Zip'], right_on=['Year', 'Month', 'Zip'], suffixes=['', '_Prior'])
df_new.drop(['Year_Prior', 'Prior Year'], inplace=True, axis=1)
print("{} rows lost in data merge.".format(len(dfv4.index)-len(df_new.index)))
print("Water RMS Error: {0:.3f} for 'Last-year' Model".format( np.sqrt(np.mean( (df_new['Water Use'].values - df_new['Water Use_Prior'].values)**2)) ))
print("Power RMS Error: {0:.3f} for 'Last-year' Model".format( np.sqrt(np.mean( (df_new['Power Use'].values - df_new['Power Use_Prior'].values)**2)) ))
I next create a baseline predictive model using a linear regression, evaluated with the same metric.
def fitAndPlot(train_features, test_features, model, **extraArgs):
    # Create regression objects for water and power
    water_model = model(**extraArgs)
    power_model = model(**extraArgs)
    # Timing
    start = process_time()
    # Fit the data
    water_model.fit(train_features, water_target)
    power_model.fit(train_features, power_target)
    fit_time = process_time() - start
    start = process_time()
    # Get the predictions
    wpredictions = water_model.predict(test_features)
    ppredictions = power_model.predict(test_features)
    predict_time = process_time() - start
    # Plot the actuals and the predictions
    f, (ax1, ax2) = plt.subplots(2, 1)
    ax1.scatter(water_actual, wpredictions, color='black')
    ax1.set_ylabel('Water Model Predictions')
    ax1.set_xlabel('Actuals')
    ax1.set_xlim(0, max(water_actual))
    # Plot the slope=1 line for reference
    X = np.linspace(ax1.get_xlim()[0], ax1.get_xlim()[1], 100)
    ax1.plot(X, X, linestyle='--')
    ax2.scatter(power_actual, ppredictions, color='black')
    ax2.set_ylabel('Power Model Predictions')
    ax2.set_xlabel('Actuals')
    ax2.set_xlim(0, max(power_actual))
    # Plot the slope=1 line for reference
    X = np.linspace(ax2.get_xlim()[0], ax2.get_xlim()[1], 100)
    ax2.plot(X, X, linestyle='--')
    plt.tight_layout()
    # Get the RMS values
    print("Water RMS Error: {0:.3f} for {1}".format(np.sqrt(np.mean((wpredictions - water_actual) ** 2)), model.__name__))
    print("Power RMS Error: {0:.3f} for {1}".format(np.sqrt(np.mean((ppredictions - power_actual) ** 2)), model.__name__))
    print("Fit Time: {} seconds".format(fit_time))
    print("Predict Time: {} seconds".format(predict_time))
    return (water_model, power_model)
water_model,power_model = fitAndPlot(train_features, test_features,LinearRegression)
The linear regression model is already performing better than the 'last year' model. I follow this with further regression models.
I next apply the random forest regression using the same train/test data.
water_model,power_model = fitAndPlot(train_features,test_features, RandomForestRegressor, n_estimators=300, min_samples_leaf=10,random_state=32)
The random forest has better performance than the linear model, though it took significantly longer.
I look at two other models. First, the Adaboost ensemble method using the Decision Tree Regressor as the base model.
water_model2,power_model2 = fitAndPlot(train_features, test_features, AdaBoostRegressor, base_estimator=DecisionTreeRegressor(min_samples_leaf=10), n_estimators=300,random_state=32,loss='square')
The performance and timing are both slightly better than the random forest model. One final model:
water_model3,power_model3 = fitAndPlot(train_features, test_features, GradientBoostingRegressor,min_samples_leaf=10, n_estimators=500,random_state=32,max_depth=200,verbose=1)
The Adaboost model has the best performance for the Water Use data, but the Gradient Boosting regressor has the best performance for the Power Use model. Because they are two separate datasets, I use the best performing model for each one and tune the hyperparameters separately.
I tune the Adaboost model hyperparameters, focusing on the minimum number of samples per leaf for the underlying decision tree regressor. I increase the number of estimators to reduce the noise in the trend data.
min_leaf_grid = range(1,41,4)
rms_scores = []
for min_leaf in min_leaf_grid:
    print("Working min_leaf {}".format(min_leaf))
    # Create the Adaboost water model for this leaf size
    water_model = AdaBoostRegressor(base_estimator=DecisionTreeRegressor(min_samples_leaf=min_leaf), n_estimators=1200, random_state=32, loss='square')
    # Timing
    start = process_time()
    # Fit the data
    water_model.fit(train_features, water_target)
    fit_time = process_time() - start
    start = process_time()
    # Get the predictions
    wpredictions = water_model.predict(test_features)
    predict_time = process_time() - start
    rms_score = np.sqrt(np.mean((wpredictions - water_actual) ** 2))
    rms_scores.append(rms_score)
plt.plot(min_leaf_grid, rms_scores, marker='o')
plt.xlabel("Minimum Samples per Leaf")
plt.ylabel("RMS Error")
The best model performance comes around a minimum leaf size of 20. The model is overfitting with a smaller minimum leaf size and is underfitting with a larger minimum leaf size. Therefore, I run one final model for the water use data.
final_water_model= AdaBoostRegressor(base_estimator=DecisionTreeRegressor(min_samples_leaf=20), n_estimators=1200,random_state=32,loss='square')
# Timing
start = process_time()
# Fit the data
final_water_model.fit(train_features,water_target)
fit_time = process_time() - start
start = process_time()
# Get the predictions
wpredictions = final_water_model.predict(test_features)
predict_time = process_time() - start
rms_score = np.sqrt(np.mean((wpredictions - water_actual) ** 2))
plt.scatter(water_actual, wpredictions, color='black')
ax1 = plt.gca()
ax1.set_ylabel('Water Model Predictions')
ax1.set_xlabel('Actuals')
ax1.set_xlim(0,max(water_actual))
#Plot the slope=1 line for reference
X=np.linspace(ax1.get_xlim()[0], ax1.get_xlim()[1], 100)
ax1.plot(X,X,linestyle='--')
# Get the RMS values
print("Water RMS Error: {0:.3f} for Adaboost Regressor".format( np.sqrt(np.mean((wpredictions - water_actual) ** 2))))
print("Fit Time: {} seconds".format(fit_time))
print("Predict Time: {} seconds".format(predict_time))
I change to the Gradient Boosting regressor to optimize a model for predicting the power use. I again focus on tuning the minimum samples per leaf hyperparameter.
min_leaf_grid = range(1,81,10)
rms_scores = []
for min_leaf in min_leaf_grid:
    print("Working min_leaf {}".format(min_leaf))
    # Create the Gradient Boosting power model for this leaf size
    power_model = GradientBoostingRegressor(min_samples_leaf=min_leaf, n_estimators=500, random_state=32, max_depth=200)
    # Timing
    start = process_time()
    # Fit the data
    power_model.fit(train_features, power_target)
    fit_time = process_time() - start
    start = process_time()
    # Get the predictions
    ppredictions = power_model.predict(test_features)
    predict_time = process_time() - start
    rms_score = np.sqrt(np.mean((ppredictions - power_actual) ** 2))
    rms_scores.append(rms_score)
plt.plot(min_leaf_grid, rms_scores, marker='o')
plt.xlabel("Minimum Samples per Leaf")
plt.ylabel("RMS Error")
Again, there is evidence that, for a minimum leaf size of less than 30, the model is overfitting. There is a soft minimum around 40 before the model underfits the data, so I use that parameter to train a final Power Use model.
final_power_model= GradientBoostingRegressor(min_samples_leaf=40, n_estimators=500,random_state=32,max_depth=200)
# Timing
start = process_time()
# Fit the data
final_power_model.fit(train_features,power_target)
fit_time = process_time() - start
start = process_time()
# Get the predictions
ppredictions = final_power_model.predict(test_features)
predict_time = process_time() - start
plt.scatter(power_actual, ppredictions, color='black')
ax1 = plt.gca()
ax1.set_ylabel('Power Model Predictions')
ax1.set_xlabel('Actuals')
ax1.set_xlim(0,max(power_actual))
#Plot the slope=1 line for reference
X=np.linspace(ax1.get_xlim()[0], ax1.get_xlim()[1], 100)
ax1.plot(X,X,linestyle='--')
# Get the RMS values
print("Power RMS Error: {0:.3f} for Gradient Boosting Regressor".format( np.sqrt(np.mean((ppredictions - power_actual) ** 2))))
print("Fit Time: {} seconds".format(fit_time))
print("Predict Time: {} seconds".format(predict_time))
I investigate the limitations of each model - where do the models disagree from the actual data? I plot the residuals for the Water Use model first.
water_residuals = np.abs(wpredictions - water_actual)
plt.plot(water_residuals,marker='o',linestyle='')
plt.xlabel("Point number")
plt.ylabel("Water Model Residuals")
I investigate all points with a residual greater than 150.
water_poor_fit = dfv7.loc[test.index[water_residuals > 150]]
water_poor_fit['Model Predictions'] = wpredictions[water_residuals > 150]
water_poor_fit
Most of these points where the model does a poor job fitting the data come from only a handful of zip codes: 91350, 91201, 91265, and 90211. There may be other events or other unique problems with these locations that caused the model predictions to not match the actuals.
power_residuals = np.abs(ppredictions - power_actual)
plt.plot(power_residuals,marker='o',linestyle='')
plt.xlabel("Point number")
plt.ylabel("Power Model Residuals")
In the power model case, I investigate points where the residuals are greater than 200.
power_poor_fit = dfv7.loc[test.index[power_residuals > 200]]
power_poor_fit['Model Predictions'] = ppredictions[power_residuals > 200]
power_poor_fit
The points where the power model fails are not as clearly grouped by location, size, or any other obvious category. There may be a pattern where the HTDD or the CLDD values are larger than typical, but that is not definitive.
Finally, I look at the feature importances for both the water and power use models.
importances = final_water_model.feature_importances_
std = np.std([tree.feature_importances_ for tree in final_water_model.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Water Use Model Feature ranking:")
ntop = 0
for f in range(train_features.shape[1]):
    if importances[indices[f]] > 0.01:
        ntop += 1
        print("{0:d}. {1:s}({4:d}): {2:.2f}({3:.2f})% ".format(f + 1, train_scaled.iloc[:, 2:].columns[indices[f]], importances[indices[f]]*100, std[indices[f]]*100, indices[f]))
plt.title("Water Use Model Feature importances")
plt.bar(range(train_features.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xlim([-1, 36])
plt.show()
The most important features for determining water use are: the number of people, the level of low-income houses, overall economic status, the size of the zip code (more importantly the land area), and the amount of precipitation. Those all agree with my original hypothesis that population, economics, and weather all play important roles in determining water use.
importances = final_power_model.feature_importances_
std = np.std([tree[0].feature_importances_ for tree in final_power_model.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Power Use Model Feature ranking:")
ntop = 0
for f in range(train_features.shape[1]):
    if importances[indices[f]] > 0.01:
        ntop += 1
        print("{0:d}. {1:s}({4:d}): {2:.2f}({3:.2f})% ".format(f + 1, train_scaled.iloc[:, 2:].columns[indices[f]], importances[indices[f]]*100, std[indices[f]]*100, indices[f]))
plt.title("Power Use Model Feature importances")
plt.bar(range(train_features.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xlim([-1, 30])
plt.show()
The most important features for the power use model are population, economics, land area, then heating degree days, cooling degree days, and average temperature. Again, these agree with my hypothesis.
# Miscellaneous Model Properties
print( "Model First Date: {}".format(dfv4['Date'].min()))
print( "Model Last Date: {}".format(dfv4['Date'].max()))
print("Total Zip Codes: {}".format(len(dfv4['Zip'].unique())))
print("Total Population: {} people".format(dfzipgroups[dfzipgroups['ZCTA5'].isin(dfv4['Zip'].unique())]['ZPOP'].sum()))
I calculate the accuracy of both the water use and the power use models.
nonzero = power_actual>0
power_accuracy = (ppredictions[nonzero] - power_actual[nonzero])/power_actual[nonzero]
x, bins, p = plt.hist(np.abs(power_accuracy), bins=100, density=True)
plt.xlabel('Prediction Fractional Error: $|$Prediction - Actual$|$/Actual')
plt.ylabel('Fraction of Total Predictions')
for item in p:
    item.set_height(item.get_height()/sum(x))
plt.ylim(0,.3)
plt.xlim(0,1)
plt.show()
print("{0:.0f}% of the power use predictions have better than a 95% accuracy".format(float(len(np.extract(np.abs(power_accuracy) < 0.05,power_accuracy)))/len(power_accuracy)*100))
nonzero = water_actual>0
water_accuracy = (wpredictions[nonzero] - water_actual[nonzero])/water_actual[nonzero]
x, bins, p = plt.hist(np.abs(water_accuracy), bins=200, density=True)
plt.xlabel('Prediction Fractional Error: $|$Prediction - Actual$|$/Actual')
plt.ylabel('Fraction of Total Predictions')
for item in p:
    item.set_height(item.get_height()/sum(x))
plt.ylim(0,.3)
plt.xlim(0,1)
plt.show()
print("{0:.0f}% of the water use predictions have better than a 60% accuracy".format(float(len(np.extract(np.abs(water_accuracy) < 0.4,water_accuracy)))/len(water_accuracy)*100))
I built machine learning models to predict the water use (in HCF) and power use (in kWh) for the city of Los Angeles. The models were based on data from July 2005 to June 2013. They cover the water and power usage for 145 different zip codes housing around 4.7 million people.
I compared a very simple initial model (that used the data from the previous year to predict the current month's usage) to machine learning models that utilize the month, year, and zip code, US Census population and area data, IRS income data, and historical weather data.
I measured the model performance using the root-mean-square of the difference between the model's predictions and the actual usage for approximately 20% of the dataset, reserved as a set of test data.
The initial model had the following RMS errors:
Water Use prediction RMS Error: 62.052 for the 'Last-year' Model
Power Use prediction RMS Error: 139.855 for the 'Last-year' Model
The best machine learning models had significantly improved measures:
Water Use prediction RMS Error: 31.016 using an Adaboost Regressor
Power Use prediction RMS Error: 45.757 using a Gradient Boosting Regressor
Finally, I measured the relative error of the predictions of the machine learning models. This is calculated as
$\frac{|\textrm{Predicted Usage} - \textrm{Actual Usage}|}{\textrm{Actual Usage}}$.
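As a quick numeric check of this formula (with toy numbers, not values from the dataset):

```python
import numpy as np

# Hypothetical predictions and actuals to illustrate the fractional-error formula.
predictions = np.array([96.0, 210.0, 48.0])
actuals = np.array([100.0, 200.0, 50.0])

fractional_error = np.abs(predictions - actuals) / actuals
print(fractional_error)  # → [0.04 0.05 0.04]

# "Better than 95% accuracy" means a fractional error strictly below 0.05.
share_within_5pct = np.mean(fractional_error < 0.05)
```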
The power use model performed very well with 68% of the power use predictions scoring better than a 95% accuracy. The water use model did not perform quite as well; however, 72% of the water use predictions had better than a 60% accuracy.