Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Project 1: Quick Theory Validation
Project 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
Project 3: Building a Neural Network
Project 4: Reducing Noise in Our Input Data
Analyzing Inefficiencies in our Network
Project 5: Making our Network More Efficient
Project 6: Reducing Noise by Strategically Reducing the Vocabulary
Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like "The", "the", and "THE", all the same way.
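If you did need that conversion step, a minimal sketch might look like the following (raw_reviews is a hypothetical name for the unprocessed list of review strings):

    # Hypothetical preprocessing step: lowercase every review so that
    # "The", "the", and "THE" are all counted as the same word.
    reviews = [review.lower() for review in raw_reviews]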
Project 1: Quick Theory Validation
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
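As a starting point, here is a minimal sketch of that counting loop. It assumes reviews and labels are parallel lists holding review strings and 'POSITIVE'/'NEGATIVE' label strings, as used elsewhere in this notebook:

    from collections import Counter

    positive_counts = Counter()
    negative_counts = Counter()
    total_counts = Counter()

    # Tally every word in every review. The positive or negative counter
    # is chosen by the review's label; total_counts is always updated.
    for review, label in zip(reviews, labels):
        for word in review.split(' '):
            if label == 'POSITIVE':
                positive_counts[word] += 1
            else:
                negative_counts[word] += 1
            total_counts[word] += 1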
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
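Using that hint, a sketch of the calculation might look like this. The cutoff of 100 uses is a hypothetical choice to avoid noisy ratios for rare words:

    # Calculate the positive-to-negative ratio for every common word.
    pos_neg_ratios = Counter()
    for word, count in total_counts.items():
        if count >= 100:
            pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word] + 1)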
Examine the ratios you've calculated for a few words:
Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral, so that a word's distance from neutral indicates how much sentiment (positive or negative) it conveys.
When comparing absolute values, it's easier to do that around zero than around one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms (i.e., use np.log(ratio)).
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
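A minimal sketch of that conversion, assuming pos_neg_ratios was built as above:

    import numpy as np

    # Convert each ratio to its natural log. Neutral words (ratio near 1)
    # map to roughly 0; positive words become positive, negative words negative.
    for word, ratio in pos_neg_ratios.items():
        pos_neg_ratios[word] = np.log(ratio)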
Examine the new ratios you've calculated for the same words from before:
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio – it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
Project 2: Creating the Input/Output Data
TODO: Create a set named vocab that contains every word in the vocabulary.
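One straightforward sketch, assuming total_counts already contains every word seen in any review:

    # The keys of total_counts are exactly the words seen in the reviews.
    vocab = set(total_counts.keys())
    vocab_size = len(vocab)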
Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074.
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
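A minimal sketch of that initialization:

    import numpy as np

    # A single row with one column per vocabulary word, all zeros.
    layer_0 = np.zeros((1, vocab_size))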
Run the following cell. It should display (1, 74074).
layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
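That cell will look something like this sketch, which maps each word to a fixed column index in layer_0:

    # Create a lookup table from each word to its index in the input layer.
    word2index = {}
    for i, word in enumerate(vocab):
        word2index[word] = i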
TODO: Complete the implementation of update_input_layer. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside layer_0.
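One possible implementation, sketched under the assumption that layer_0 and word2index exist as built above:

    def update_input_layer(review):
        """Count each word in `review` and store the counts in layer_0."""
        global layer_0
        # Clear out the previous state by resetting the layer to all 0s.
        layer_0 *= 0
        for word in review.split(' '):
            layer_0[0][word2index[word]] += 1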
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1, depending on whether the given label is NEGATIVE or POSITIVE, respectively.
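A minimal sketch of that function:

    def get_target_for_labels(label):
        """Return 1 for 'POSITIVE' labels and 0 for 'NEGATIVE' labels."""
        if label == 'POSITIVE':
            return 1
        else:
            return 0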
Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs (see the forward-pass sketch after this list).
Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
Implement the pre_process_data function to create the vocabulary for our training data generating functions
Ensure train trains over the entire corpus
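To make the "no non-linearity in the hidden layer" point concrete, here is a hedged sketch of just the forward pass, with assumed weight matrices named weights_0_1 and weights_1_2 (the full class also needs the training logic, which this omits):

    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    # Forward pass sketch: the hidden layer is a plain linear combination
    # (no activation function); only the output layer uses a sigmoid.
    def forward_pass(layer_0, weights_0_1, weights_1_2):
        layer_1 = layer_0.dot(weights_0_1)           # no non-linearity here
        layer_2 = sigmoid(layer_1.dot(weights_1_2))  # sigmoid on the output only
        return layer_2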
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
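That cell amounts to a single line like this sketch (assuming the class framework's constructor signature):

    # Hold out the last 1000 reviews for testing; train on the rest.
    mlp = SentimentNetwork(reviews[:-1000], labels[:-1000], learning_rate=0.1)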
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 4: Reducing Noise in Our Input Data
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
Copy the SentimentNetwork class you created earlier into the following cell.
Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used (see the sketch after this list).
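A sketch of that modification, keeping the structure from Project 2:

    def update_input_layer(review):
        """Set layer_0 to 1 at the index of each word used in `review`."""
        global layer_0
        layer_0 *= 0
        for word in review.split(' '):
            # Store only whether the word appears, not how many times.
            layer_0[0][word2index[word]] = 1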
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Project 5: Making our Network More Efficient
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
Copy the SentimentNetwork class from the previous project into the following cell.
Remove the update_input_layer function - you will not need it in this version.
Modify init_network:
  You no longer need a separate input layer, so remove any mention of self.layer_0
  You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
  Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
  At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
  Remove the call to update_input_layer
  Use self's layer_1 instead of a local layer_1 object.
  In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review (see the sketch after this list).
  When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
  Remove the call to update_input_layer
  Use self's layer_1 instead of a local layer_1 object.
  Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
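To see why the forward-pass change works, here is a self-contained sketch with hypothetical toy sizes: because each input entry is either 0 or 1, multiplying layer_0 by weights_0_1 reduces to summing the weight rows for the word indices present in the review:

    import numpy as np

    # Toy sizes, purely for illustration.
    vocab_size, hidden_nodes = 5, 3
    weights_0_1 = np.random.randn(vocab_size, hidden_nodes)

    # Indices of the words present in one review (assumed for illustration).
    review_indices = [0, 2, 4]

    # Dense version: build a 0/1 input row and multiply.
    layer_0 = np.zeros((1, vocab_size))
    layer_0[0, review_indices] = 1
    dense_layer_1 = layer_0.dot(weights_0_1)

    # Efficient version: skip the multiplication, just sum the rows used.
    layer_1 = np.zeros((1, hidden_nodes))
    for index in review_indices:
        layer_1 += weights_0_1[index]

    assert np.allclose(dense_layer_1, layer_1)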
Run the following cell to recreate the network and train it once again.
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Project 6: Reducing Noise by Strategically Reducing the Vocabulary
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
Copy the SentimentNetwork class from the previous project into the following cell.
Modify pre_process_data:
  Add two additional parameters: min_count and polarity_cutoff
  Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
  Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
  Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
  Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff (see the sketch after this list).
Modify __init__:
  Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
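A hedged sketch of that filtering logic inside pre_process_data, assuming the counters and log ratios are computed as earlier in the notebook:

    # Keep a word only if it is common enough and polarized enough.
    # pos_neg_ratios holds log ratios, so abs() measures how far a word
    # is from neutral; missing words default to 0 and are filtered out.
    vocab = set()
    for word, count in total_counts.items():
        if count > min_count and abs(pos_neg_ratios[word]) >= polarity_cutoff:
            vocab.add(word)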
Run the following cell to train your network with a small polarity cutoff.
And run the following cell to test its performance.
Run the following cell to train your network with a much larger polarity cutoff.
And run the following cell to test its performance.