
Notes on "Deep Learning with Python" --- 3.4 Classifying movie reviews: a binary classification problem


I. Summary

One-sentence takeaways:

binary_crossentropy loss: for the sigmoid scalar output of a binary classification problem, you should use the binary_crossentropy loss function.
More training is not always better: as the network gets better and better on the training data, it eventually overfits and produces increasingly worse results on data it has never seen. Always keep monitoring the model's performance on data outside the training set.
Use sigmoid in the last layer for binary classification: for a binary classification problem (two output classes), the last layer of the network should be a Dense layer with a single unit and a sigmoid activation; the network output should be a scalar between 0 and 1 representing a probability.
model = tf.keras.models.Sequential() 
model.add(tf.keras.layers.Dense(16, activation='relu', input_shape=(10000,))) 
model.add(tf.keras.layers.Dense(16, activation='relu')) 
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))


1. What are the two common ways of preparing the data?

Embedding layer (lists -> tensor): pad the lists so they all have the same length, turn them into an integer tensor of shape (samples, word_indices), and make the first layer of the network one that can handle such integer tensors (an Embedding layer); see the sketch after this list.
One-hot encoding: one-hot encode the lists into vectors of 0s and 1s. For example, the sequence [3, 5] becomes a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are 1s. The first layer of the network can then be a Dense layer, which handles floating-point vector data.
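
A minimal sketch of the first (Embedding) approach, assuming the IMDB train_data loaded later in this notebook and a hypothetical cutoff length of 200:

import tensorflow as tf

max_review_len = 200  # hypothetical cutoff: longer reviews are truncated, shorter ones padded
# Pad every review to the same length -> integer tensor of shape (samples, max_review_len)
padded = tf.keras.preprocessing.sequence.pad_sequences(train_data, maxlen=max_review_len)

# The first layer is an Embedding layer, which can consume such integer tensors directly
embedding_model = tf.keras.models.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=16, input_length=max_review_len),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])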


2. What is an activation function, and why is one needed?

Linear transformations only: without an activation function such as relu (also called a non-linearity), a Dense layer would consist of only two linear operations, a dot product and an addition: output = dot(W, input) + b. The layer could then learn only linear transformations (affine transformations) of the input data.
A very limited set of transformations: the layer's hypothesis space would be the set of all possible linear transformations of the input data into a 16-dimensional space. Such a hypothesis space is too restricted and cannot benefit from multiple layers of representations, because a deep stack of linear layers still implements a linear operation: adding more layers does not extend the hypothesis space.
A richer hypothesis space: to get access to a much richer hypothesis space that benefits from deep representations, you need a non-linearity, i.e. an activation function (see the small demo after this list).
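
A tiny NumPy demo (made-up shapes and random weights) of the point above: two stacked linear layers collapse into one linear transform, while inserting a relu breaks that equivalence:

import numpy as np

np.random.seed(0)
x = np.random.randn(4, 10000)                            # 4 fake samples
W1, b1 = np.random.randn(10000, 16), np.random.randn(16)
W2, b2 = np.random.randn(16, 16), np.random.randn(16)

# Two linear layers stacked ...
h = np.dot(x, W1) + b1
out_stacked = np.dot(h, W2) + b2

# ... equal exactly one linear layer with W = W1·W2 and b = b1·W2 + b2
out_single = np.dot(x, np.dot(W1, W2)) + (np.dot(b1, W2) + b2)
print(np.allclose(out_stacked, out_single))              # True: no extra expressive power

# With a non-linearity in between, the collapse no longer works
out_relu = np.dot(np.maximum(h, 0), W2) + b2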


3. What loss function should a model that outputs probabilities (i.e., a classifier) use?

Use crossentropy for models that output probabilities: for a model that outputs probabilities, crossentropy is usually the best loss to use.
Crossentropy measures the distance between probability distributions: crossentropy is a quantity from the field of information theory that measures the distance between probability distributions; here, between the ground-truth distribution and the predictions (a small numeric example follows).
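
A minimal numeric sketch (made-up labels and sigmoid outputs) of what binary_crossentropy computes:

import numpy as np

y_true = np.array([1., 0., 1., 0.])        # made-up labels
y_pred = np.array([0.9, 0.2, 0.6, 0.4])    # made-up sigmoid outputs (probabilities)

# Binary cross-entropy: average "surprise" of the true labels under the predicted probabilities
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce)   # ≈ 0.34; 0 would mean perfect predictions

# tf.keras.losses.binary_crossentropy(y_true, y_pred) should give the same value
# (up to the small epsilon clipping Keras applies for numerical stability)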


4. Calling model.fit() returns a History object?

history.history is a dictionary: the History object has a member history, a dictionary containing data about everything that happened during training. Let's look at it. (Note: in recent tf.keras versions the keys are 'loss', 'accuracy', 'val_loss' and 'val_accuracy', which is what the plotting code further down uses.)

>>> history_dict = history.history
>>> history_dict.keys()
dict_keys(['val_acc', 'acc', 'val_loss', 'loss'])

 

5. Loading the dataset with Keras?

Call load_data on the dataset module: (train_data, train_labels), (test_data, test_labels) = tf.keras.datasets.imdb.load_data(num_words=10000)


6. Swapping the keys and values of a Python dict?

[An inline for...in comprehension with key and value swapped]: reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
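
A toy demo of the swap, using a made-up miniature word index:

word_index = {'the': 1, 'and': 2, 'brilliant': 530}   # made-up miniature word index
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
print(reverse_word_index)   # {1: 'the', 2: 'and', 530: 'brilliant'}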


7. The default value of dict.get in Python?

[If the key is not found, the default '?' is returned]: reverse_word_index.get(i - 3, '?')
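
A minimal demo of the fallback behaviour (made-up reverse index):

reverse_word_index = {1: 'the', 2: 'and'}   # made-up miniature reverse index
print(reverse_word_index.get(1, '?'))       # 'the' -> key found, its value is returned
print(reverse_word_index.get(9999, '?'))    # '?'   -> key missing, the default is returned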


8. NumPy batch assignment: results1 = np.zeros((2, 10))?

results1[0, [1, 3, 5]] = 1: sets positions (0, 1), (0, 3) and (0, 5) to 1.
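
A minimal reproduction (the same idea is exercised in the "Testing the approach" cells further down):

import numpy as np

results1 = np.zeros((2, 10))
results1[0, [1, 3, 5]] = 1   # fancy indexing: set columns 1, 3 and 5 of row 0 to 1
print(results1[0])           # [0. 1. 0. 1. 0. 1. 0. 0. 0. 0.]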


II. Classifying movie reviews: a binary classification problem


 

import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

Steps

1. Load the dataset
2. Split the dataset (into a training set and a test set)
3. Build the model
4. Train the model
5. Evaluate the model

Walkthrough

1. Load the dataset

In [2]:
(train_data, train_labels),(test_data, test_labels)= tf.keras.datasets.imdb.load_data(num_words=10000) 

The argument num_words=10000 means that only the 10,000 most frequently occurring words in the training data are kept; rarer words are discarded.

This keeps the resulting vector data at a manageable size.

In [3]:
print(type(train_data))
print(type(train_labels))
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
In [4]:
print(train_data.shape)
print(train_labels.shape)
(25000,)
(25000,)
In [5]:
print(train_data[0:2])
print(train_labels[0:2])
[list([1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32])
 list([1, 194, 1153, 194, 8255, 78, 228, 5, 6, 1463, 4369, 5012, 134, 26, 4, 715, 8, 118, 1634, 14, 394, 20, 13, 119, 954, 189, 102, 5, 207, 110, 3103, 21, 14, 69, 188, 8, 30, 23, 7, 4, 249, 126, 93, 4, 114, 9, 2300, 1523, 5, 647, 4, 116, 9, 35, 8163, 4, 229, 9, 340, 1322, 4, 118, 9, 4, 130, 4901, 19, 4, 1002, 5, 89, 29, 952, 46, 37, 4, 455, 9, 45, 43, 38, 1543, 1905, 398, 4, 1649, 26, 6853, 5, 163, 11, 3215, 2, 4, 1153, 9, 194, 775, 7, 8255, 2, 349, 2637, 148, 605, 2, 8003, 15, 123, 125, 68, 2, 6853, 15, 349, 165, 4362, 98, 5, 4, 228, 9, 43, 2, 1157, 15, 299, 120, 5, 120, 174, 11, 220, 175, 136, 50, 9, 4373, 228, 8255, 5, 2, 656, 245, 2350, 5, 4, 9837, 131, 152, 491, 18, 2, 32, 7464, 1212, 14, 9, 6, 371, 78, 22, 625, 64, 1382, 9, 8, 168, 145, 23, 4, 1690, 15, 16, 4, 1355, 5, 28, 6, 52, 154, 462, 33, 89, 78, 285, 16, 145, 95])]
[1 0]

Because we limited ourselves to the 10,000 most frequent words, no word index exceeds 10,000.

In [6]:
max([max(sequence) for sequence in train_data]) 
Out[6]:
9999
In [7]:
word_index = tf.keras.datasets.imdb.get_word_index() 
# Reverse the mapping: map integer indices back to words
reverse_word_index = dict(     
    [(value, key) for (key, value) in word_index.items()]) 
# Decode the review. Note that the indices are offset by 3, because 0, 1 and 2
# are reserved indices for "padding", "start of sequence"
# and "unknown" respectively
decoded_review = ' '.join(     
    [reverse_word_index.get(i - 3, '?') for i in train_data[0]])
In [8]:
print(decoded_review)
? this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert ? is an amazing actor and now the same being director ? father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for ? and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also ? to the two little boy's that played the ? of norman and paul they were just brilliant children are often left out of the ? list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all
In [ ]:
print(type(word_index))
In [ ]:
for item in word_index.items():
    if(item[1]<=20):
        print(item,item[0],item[1])

2. Split the dataset (into a training set and a test set)

Because the review text is represented by numbers whose magnitudes carry no meaning, we choose one-hot encoding.

So for both the training set and the test set, for both data and labels, we simply one-hot encode everything.

(train_data, train_labels),(test_data, test_labels)

In [ ]:
train_data=np.array(train_data)
In [ ]:
print(type(train_data))
print(train_data.shape)
In [ ]:
print(train_data[0])
print(type(train_data[0]))
In [ ]:
print(len(train_data[0]))
print(len(train_data[1]))
print(len(train_data[2]))
print(len(train_data[3]))
In [ ]:
max_review_len=200
In [ ]:
train_data = tf.keras.preprocessing.sequence.pad_sequences(train_data, maxlen=max_review_len)
In [ ]:
train_data = np.concatenate(train_data, axis=0) 
print(train_data[0].shape)
print(train_data[1].shape)
print(train_data[2].shape)
print(train_data[3].shape)
In [ ]:
train_data=tf.one_hot(train_data,depth=10000)
test_data=tf.one_hot(test_data,depth=10000)
train_labels=tf.one_hot(train_labels,depth=2)
test_labels=tf.one_hot(test_labels,depth=2)

===============================================

In [9]:
def vectorize_sequences(sequences, dimension=10000):     
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))      
    for i, sequence in enumerate(sequences):         
        # Set the entries of results[i] whose indices appear in the review to 1
        results[i, sequence] = 1.      
    return results 
In [10]:
# Vectorize the training data 
x_train = vectorize_sequences(train_data)  
# Vectorize the test data
x_test = vectorize_sequences(test_data)  
In [11]:
print(x_train.shape)
print(x_test.shape)
(25000, 10000)
(25000, 10000)
In [12]:
print(x_train[0])
[0. 1. 1. ... 0. 0. 0.]
In [ ]:
print(type(x_train[0]))
In [ ]:
for i in x_train[0]:
    #print(i)
    pass
In [ ]:
# In effect, the position corresponding to each word index that appears in the review is set to 1,
# and positions that do not appear are set to 0.
# So if a review's word indices are 1, 5 and 7, the 10000-dimensional vector has 1s at positions 1/5/7 and 0s elsewhere
x_train_0=[]
for item in enumerate(x_train[0]):
    # print(item)
    # print(item[0],item[1])
    if(int(item[1])==1):
        x_train_0.append(item[0])
        #print(item[0])
print(x_train_0)

==============================================

Testing the approach

In [ ]:
t1_data=[[8,2,5],[3,5,1]]
t1_data=np.array(t1_data)
print(t1_data)
In [ ]:
print(type(t1_data[0]))
In [ ]:
results1 = np.zeros((len(t1_data), 10))      
print(results1)
In [ ]:
# batch assignment via fancy indexing
results1[0,[1,3,5]]=1
print(results1)
In [ ]:
for i, sequence1 in enumerate(t1_data):  
    print(i,sequence1)
    results1[i, sequence1] = 1.  
print(results1)

========================================

Vectorize the labels

That is, convert the integer labels to floating-point numbers.

In [13]:
print(train_labels)
print(test_labels)
[1 0 0 ... 0 1 0]
[0 1 1 ... 0 0 0]
In [14]:
y_train = np.asarray(train_labels).astype('float32') 
y_test = np.asarray(test_labels).astype('float32')
In [15]:
print(y_train)
print(y_test)
[1. 0. 0. ... 0. 1. 0.]
[0. 1. 1. ... 0. 0. 0.]

3. Build the model

In [16]:
model = tf.keras.models.Sequential() 
model.add(tf.keras.layers.Dense(16, activation='relu', input_shape=(10000,))) 
model.add(tf.keras.layers.Dense(16, activation='relu')) 
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
In [17]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 16)                160016    
_________________________________________________________________
dense_1 (Dense)              (None, 16)                272       
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 17        
=================================================================
Total params: 160,305
Trainable params: 160,305
Non-trainable params: 0
_________________________________________________________________

Network architecture: 10000 -> 16 -> 16 -> 1

So the parameter counts are: 160016 (10000*16 + 16) -> 272 (16*16 + 16) -> 17 (16*1 + 1); a quick check follows.
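
A small sanity check of that arithmetic against each layer's own parameter count (a sketch; it assumes the model built above and the layer.input_shape / count_params() attributes of the tf.keras version used here):

# Each Dense layer has (inputs * units) weights plus `units` biases
for layer in model.layers:
    inputs = layer.input_shape[-1]
    units = layer.units
    print(layer.name, inputs * units + units, '==', layer.count_params())
# dense    10000*16 + 16 = 160016
# dense_1     16*16 + 16 =    272
# dense_2      16*1 +  1 =     17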

4. Train the model

In [20]:
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
In [21]:
model.compile(
    optimizer='rmsprop',               
    loss='binary_crossentropy',    
    metrics=['accuracy'])
In [22]:
history = model.fit(
    partial_x_train,                     
    partial_y_train,                     
    epochs=20,                     
    batch_size=512,                     
    validation_data=(x_val, y_val)) 
Epoch 1/20
30/30 [==============================] - 1s 39ms/step - loss: 0.5100 - accuracy: 0.7898 - val_loss: 0.4187 - val_accuracy: 0.8261
Epoch 2/20
30/30 [==============================] - 1s 23ms/step - loss: 0.3021 - accuracy: 0.9041 - val_loss: 0.3380 - val_accuracy: 0.8622
Epoch 3/20
30/30 [==============================] - 1s 23ms/step - loss: 0.2207 - accuracy: 0.9286 - val_loss: 0.2784 - val_accuracy: 0.8893
Epoch 4/20
30/30 [==============================] - 1s 23ms/step - loss: 0.1763 - accuracy: 0.9414 - val_loss: 0.2847 - val_accuracy: 0.8850
Epoch 5/20
30/30 [==============================] - 1s 23ms/step - loss: 0.1417 - accuracy: 0.9545 - val_loss: 0.2826 - val_accuracy: 0.8882
Epoch 6/20
30/30 [==============================] - 1s 23ms/step - loss: 0.1172 - accuracy: 0.9649 - val_loss: 0.2944 - val_accuracy: 0.8858
Epoch 7/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0943 - accuracy: 0.9734 - val_loss: 0.3138 - val_accuracy: 0.8821
Epoch 8/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0833 - accuracy: 0.9759 - val_loss: 0.3526 - val_accuracy: 0.8774
Epoch 9/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0687 - accuracy: 0.9809 - val_loss: 0.3644 - val_accuracy: 0.8787
Epoch 10/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0566 - accuracy: 0.9853 - val_loss: 0.3836 - val_accuracy: 0.8755
Epoch 11/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0453 - accuracy: 0.9893 - val_loss: 0.4044 - val_accuracy: 0.8770
Epoch 12/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0397 - accuracy: 0.9906 - val_loss: 0.4299 - val_accuracy: 0.8736
Epoch 13/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0309 - accuracy: 0.9940 - val_loss: 0.4794 - val_accuracy: 0.8707
Epoch 14/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0267 - accuracy: 0.9945 - val_loss: 0.4887 - val_accuracy: 0.8705
Epoch 15/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0238 - accuracy: 0.9953 - val_loss: 0.5195 - val_accuracy: 0.8711
Epoch 16/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0137 - accuracy: 0.9989 - val_loss: 0.5483 - val_accuracy: 0.8706
Epoch 17/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0144 - accuracy: 0.9980 - val_loss: 0.5897 - val_accuracy: 0.8687
Epoch 18/20
30/30 [==============================] - 1s 22ms/step - loss: 0.0100 - accuracy: 0.9991 - val_loss: 0.6161 - val_accuracy: 0.8660
Epoch 19/20
30/30 [==============================] - 1s 23ms/step - loss: 0.0097 - accuracy: 0.9985 - val_loss: 0.6582 - val_accuracy: 0.8669
Epoch 20/20
30/30 [==============================] - 1s 21ms/step - loss: 0.0060 - accuracy: 0.9998 - val_loss: 0.6865 - val_accuracy: 0.8646
In [28]:
history = model.fit(
    x_test,                     
    y_test,                     
    epochs=20,                     
    batch_size=512,                     
    validation_data=(x_val, y_val)) 
Epoch 1/20
49/49 [==============================] - 1s 23ms/step - loss: 0.4419 - accuracy: 0.8630 - val_loss: 0.3042 - val_accuracy: 0.8806
Epoch 2/20
49/49 [==============================] - 1s 17ms/step - loss: 0.2657 - accuracy: 0.8940 - val_loss: 0.2920 - val_accuracy: 0.8861
Epoch 3/20
49/49 [==============================] - 1s 17ms/step - loss: 0.2160 - accuracy: 0.9134 - val_loss: 0.2971 - val_accuracy: 0.8890
Epoch 4/20
49/49 [==============================] - 1s 17ms/step - loss: 0.1794 - accuracy: 0.9275 - val_loss: 0.3325 - val_accuracy: 0.8802
Epoch 5/20
49/49 [==============================] - 1s 17ms/step - loss: 0.1508 - accuracy: 0.9394 - val_loss: 0.3348 - val_accuracy: 0.8842
Epoch 6/20
49/49 [==============================] - 1s 17ms/step - loss: 0.1257 - accuracy: 0.9493 - val_loss: 0.3585 - val_accuracy: 0.8850
Epoch 7/20
49/49 [==============================] - 1s 17ms/step - loss: 0.1058 - accuracy: 0.9582 - val_loss: 0.4035 - val_accuracy: 0.8793
Epoch 8/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0873 - accuracy: 0.9656 - val_loss: 0.4354 - val_accuracy: 0.8803
Epoch 9/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0728 - accuracy: 0.9714 - val_loss: 0.4747 - val_accuracy: 0.8792
Epoch 10/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0601 - accuracy: 0.9777 - val_loss: 0.5318 - val_accuracy: 0.8755
Epoch 11/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0480 - accuracy: 0.9831 - val_loss: 0.5713 - val_accuracy: 0.8731
Epoch 12/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0392 - accuracy: 0.9864 - val_loss: 0.6050 - val_accuracy: 0.8746
Epoch 13/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0303 - accuracy: 0.9900 - val_loss: 0.6676 - val_accuracy: 0.8728
Epoch 14/20
49/49 [==============================] - 1s 18ms/step - loss: 0.0237 - accuracy: 0.9924 - val_loss: 0.7142 - val_accuracy: 0.8712
Epoch 15/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0176 - accuracy: 0.9952 - val_loss: 0.7839 - val_accuracy: 0.8704
Epoch 16/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0143 - accuracy: 0.9960 - val_loss: 0.8463 - val_accuracy: 0.8692
Epoch 17/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0106 - accuracy: 0.9973 - val_loss: 0.9026 - val_accuracy: 0.8687
Epoch 18/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0092 - accuracy: 0.9977 - val_loss: 0.9356 - val_accuracy: 0.8690
Epoch 19/20
49/49 [==============================] - 1s 17ms/step - loss: 0.0048 - accuracy: 0.9993 - val_loss: 1.0125 - val_accuracy: 0.8683
Epoch 20/20
49/49 [==============================] - 1s 16ms/step - loss: 0.0040 - accuracy: 0.9993 - val_loss: 1.0841 - val_accuracy: 0.8676

5. Evaluate the model

In [23]:
history_dict = history.history 
loss_values = history_dict['loss'] 
val_loss_values = history_dict['val_loss'] 
 
epochs = range(1, len(loss_values) + 1) 
 
plt.plot(epochs, loss_values, 'bo', label='Training loss')   
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')   
plt.title('Training and validation loss') 
plt.xlabel('Epochs') 
plt.ylabel('Loss') 
plt.legend() 

plt.show()
In [26]:
plt.clf()    
acc = history_dict['accuracy']  
val_acc = history_dict['val_accuracy'] 
 
plt.plot(epochs, acc, 'bo', label='Training acc') 
plt.plot(epochs, val_acc, 'b', label='Validation acc') 
plt.title('Training and validation accuracy') 
plt.xlabel('Epochs') 
plt.ylabel('Accuracy') 
plt.legend() 
 
plt.show()
In [27]:
model.predict(x_test) 
Out[27]:
array([[0.00620641],
       [1.        ],
       [0.886647  ],
       ...,
       [0.00238384],
       [0.01434717],
       [0.6769868 ]], dtype=float32)
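
The notebook stops at predict. Following the book, a final step would be to retrain a fresh model for about 4 epochs (roughly where the validation loss started rising above) and score it on the held-out test set; a sketch:

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(10000,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
test_loss, test_acc = model.evaluate(x_test, y_test)
print(test_loss, test_acc)   # the book reports roughly 88% test accuracy with this setup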