
Notes on Deep Learning with Python (《python深度学习》) --- 3.5 The Reuters dataset: a multiclass classification example

I. Summary

One-sentence summary:

For text classification, the input word sequences can be converted into a one-hot-like (multi-hot) encoding: in each sample's vector, the positions given by the sample's word indices are set to 1.
def vectorize_sequences(sequences, dimension=10000):
    # One multi-hot row per sequence: shape (num_samples, dimension), all zeros
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # Set the positions of this sample's word indices to 1
        results[i, sequence] = 1.
    return results
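A quick sanity check of the function on a toy input (assuming numpy is imported as np):

x = vectorize_sequences([[0, 2], [1, 1, 3]], dimension=4)
print(x)
# [[1. 0. 1. 0.]
#  [0. 1. 0. 1.]]
# A repeated index (the two 1s) is simply set to 1 once.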


1. Two methods for vectorizing the labels?

Integer tensor: you can cast the label list to an integer tensor,
one-hot: or you can use one-hot (categorical) encoding.
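A minimal sketch contrasting the two options (the `model` variable is assumed to be an already-built Keras model; both losses are standard Keras):

# Option 1: keep the labels as an integer tensor and use the sparse loss
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
# model.fit(x_train, train_labels, ...)  # labels stay as integers 0..45

# Option 2: one-hot encode the labels and use categorical crossentropy
one_hot_labels = tf.keras.utils.to_categorical(train_labels, num_classes=46)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# model.fit(x_train, one_hot_labels, ...)  # labels are now (n, 46) vectors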


2. With 46 output classes, should the intermediate layers have more than 46 units?

Information cannot be recovered: in a stack of Dense layers like the ones used before, each layer can only access the information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, that information can never be recovered by later layers: every layer can potentially become an information bottleneck.
A low-dimensional layer can become an information bottleneck: the previous example used 16-dimensional intermediate layers, but a 16-dimensional space may be too small to learn to separate 46 different classes. Such low-dimensional layers can act as information bottlenecks, permanently dropping relevant information.
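The book demonstrates the bottleneck effect with a deliberately narrow layer; a minimal sketch (the 4-unit size is illustrative):

bottleneck_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10000,)),
    tf.keras.layers.Dense(4, activation='relu'),  # 46-way information squeezed through 4 dimensions
    tf.keras.layers.Dense(46, activation='softmax'),
])
# Trained the same way, this model's validation accuracy drops noticeably.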


3. Use categorical crossentropy as the multiclass loss?

For multiclass classification, the loss function should almost always be categorical crossentropy: model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
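Categorical crossentropy measures the distance between the true one-hot distribution and the predicted softmax distribution; a hand-computed sketch of the per-sample loss (values are illustrative):

import numpy as np
y_true = np.array([0., 0., 1., 0.])      # one-hot: the true class is 2
y_pred = np.array([0.1, 0.1, 0.7, 0.1])  # softmax output
loss = -np.sum(y_true * np.log(y_pred))  # only the true-class term survives
print(loss)  # -log(0.7) ≈ 0.357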


II. The Reuters dataset: a multiclass classification problem


import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

Steps

1. Load the dataset
2. Format the dataset (to make it easy to work with)
3. Build the model
4. Train the model
5. Evaluate the model

Walkthrough

1. Load the dataset

In [2]:
(train_data, train_labels), (test_data, test_labels) = tf.keras.datasets.reuters.load_data(num_words=10000)
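As a sanity check, a newswire can be decoded back to words; the index offset of 3 below is because Keras reserves indices 0, 1, 2 for padding/start-of-sequence/unknown:

word_index = tf.keras.datasets.reuters.get_word_index()
reverse_word_index = {value: key for (key, value) in word_index.items()}
decoded = ' '.join(reverse_word_index.get(i - 3, '?') for i in train_data[0])
print(decoded)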

2. Format the dataset

Vectorize the X data

In [3]:
def vectorize_sequences(sequences, dimension=10000):
    # One multi-hot row per sequence: shape (num_samples, dimension), all zeros
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # Set the positions of this sample's word indices to 1
        results[i, sequence] = 1.
    return results
In [4]:
x_train = vectorize_sequences(train_data)   
x_test = vectorize_sequences(test_data) 

Vectorize the Y data

In [5]:
print(train_labels)
[ 3  4  3 ... 25  3 25]
In [6]:
print(max(train_labels))
45
In [7]:
y_train=tf.one_hot(train_labels,depth=46)
y_test=tf.one_hot(test_labels,depth=46)
print(y_train[0:2])
tf.Tensor(
[[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
  0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
  0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]], shape=(2, 46), dtype=float32)
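Equivalently, tf.keras.utils.to_categorical produces the same (n, 46) one-hot encoding, as plain numpy arrays instead of tensors:

y_train = tf.keras.utils.to_categorical(train_labels, num_classes=46)
y_test = tf.keras.utils.to_categorical(test_labels, num_classes=46)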

3. Build the model

10000->256->128->46

In [8]:
# Build the Sequential container
model = tf.keras.Sequential()
# Input layer
model.add(tf.keras.Input(shape=(10000,)))
# Intermediate (hidden) layers
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer: 46-way softmax over the topic classes
model.add(tf.keras.layers.Dense(46,activation='softmax'))
# Print the model architecture
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 256)               2560256   
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 46)                5934      
=================================================================
Total params: 2,599,086
Trainable params: 2,599,086
Non-trainable params: 0
_________________________________________________________________

4. Train the model

In [9]:
# Configure the optimizer, loss function, and metrics
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training; note the test set is used as validation data here
history = model.fit(x_train,y_train,epochs=20,validation_data=(x_test,y_test))
Epoch 1/20
281/281 [==============================] - 2s 6ms/step - loss: 1.2356 - acc: 0.7234 - val_loss: 0.8920 - val_acc: 0.7921
Epoch 2/20
281/281 [==============================] - 1s 5ms/step - loss: 0.4190 - acc: 0.9047 - val_loss: 0.8717 - val_acc: 0.7992
Epoch 3/20
281/281 [==============================] - 1s 5ms/step - loss: 0.2399 - acc: 0.9412 - val_loss: 0.9068 - val_acc: 0.8010
Epoch 4/20
281/281 [==============================] - 1s 5ms/step - loss: 0.1853 - acc: 0.9477 - val_loss: 0.9228 - val_acc: 0.8050
Epoch 5/20
281/281 [==============================] - 1s 5ms/step - loss: 0.1590 - acc: 0.9507 - val_loss: 1.0049 - val_acc: 0.8059
Epoch 6/20
281/281 [==============================] - 1s 5ms/step - loss: 0.1424 - acc: 0.9507 - val_loss: 0.9781 - val_acc: 0.8028
Epoch 7/20
281/281 [==============================] - 1s 5ms/step - loss: 0.1238 - acc: 0.9523 - val_loss: 1.0553 - val_acc: 0.7988
Epoch 8/20
281/281 [==============================] - 1s 5ms/step - loss: 0.1210 - acc: 0.9498 - val_loss: 1.0543 - val_acc: 0.8001
Epoch 9/20
281/281 [==============================] - 1s 5ms/step - loss: 0.1105 - acc: 0.9505 - val_loss: 1.1332 - val_acc: 0.7979
Epoch 10/20
281/281 [==============================] - 1s 5ms/step - loss: 0.1014 - acc: 0.9541 - val_loss: 1.1926 - val_acc: 0.7903
Epoch 11/20
281/281 [==============================] - 2s 5ms/step - loss: 0.0995 - acc: 0.9528 - val_loss: 1.1863 - val_acc: 0.7983
Epoch 12/20
281/281 [==============================] - 1s 5ms/step - loss: 0.0918 - acc: 0.9538 - val_loss: 1.2331 - val_acc: 0.7983
Epoch 13/20
281/281 [==============================] - 1s 5ms/step - loss: 0.0875 - acc: 0.9542 - val_loss: 1.2390 - val_acc: 0.8010
Epoch 14/20
281/281 [==============================] - 1s 5ms/step - loss: 0.0847 - acc: 0.9548 - val_loss: 1.3483 - val_acc: 0.7912
Epoch 15/20
281/281 [==============================] - 1s 5ms/step - loss: 0.0882 - acc: 0.9557 - val_loss: 1.3664 - val_acc: 0.7894
Epoch 16/20
281/281 [==============================] - 1s 5ms/step - loss: 0.0815 - acc: 0.9551 - val_loss: 1.3655 - val_acc: 0.7996
Epoch 17/20
281/281 [==============================] - 2s 6ms/step - loss: 0.0777 - acc: 0.9548 - val_loss: 1.4519 - val_acc: 0.7925
Epoch 18/20
281/281 [==============================] - 2s 6ms/step - loss: 0.0772 - acc: 0.9548 - val_loss: 1.4342 - val_acc: 0.7961
Epoch 19/20
281/281 [==============================] - 2s 5ms/step - loss: 0.0833 - acc: 0.9542 - val_loss: 1.3992 - val_acc: 0.7907
Epoch 20/20
281/281 [==============================] - 2s 5ms/step - loss: 0.0765 - acc: 0.9581 - val_loss: 1.5665 - val_acc: 0.7921
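Note that val_loss bottoms out after the first couple of epochs and then climbs while the training loss keeps falling: the model is overfitting. Also, the test set is used here as validation data; a cleaner setup (as in the book) holds out part of the training set instead, a sketch:

x_val, partial_x_train = x_train[:1000], x_train[1000:]
y_val, partial_y_train = y_train[:1000], y_train[1000:]
history = model.fit(partial_x_train, partial_y_train,
                    epochs=9,  # roughly where val_loss stops improving above
                    validation_data=(x_val, y_val))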
In [10]:
plt.plot(history.epoch,history.history.get('loss'),'b--',label='train_loss')
plt.plot(history.epoch,history.history.get('val_loss'),'r-',label='test_loss')
plt.title("loss")
plt.legend()
plt.show()
In [11]:
plt.plot(history.epoch,history.history.get('acc'),'b--',label='train_acc')
plt.plot(history.epoch,history.history.get('val_acc'),'r-',label='test_acc')
plt.title("acc")
plt.legend()
plt.show()

5. Evaluate the model

In [12]:
predict_y = model.predict(x_test)
print(predict_y)
print(y_test)
[[7.35711048e-09 6.89831436e-09 1.65699458e-13 ... 1.47590251e-09
  7.46733892e-13 7.76215810e-12]
 [1.00025721e-02 1.66648746e-01 6.04209825e-02 ... 6.16808506e-07
  1.41519067e-07 1.48115179e-03]
 [9.17008128e-06 9.89150465e-01 1.16959476e-04 ... 1.20810384e-07
  1.90381849e-10 2.47081289e-07]
 ...
 [8.38777572e-12 2.47849048e-08 9.42601134e-15 ... 3.98744102e-12
  3.03298146e-16 1.18143519e-13]
 [2.00062408e-04 1.17464103e-01 8.72442790e-04 ... 3.53586111e-05
  1.96111196e-05 2.86576851e-05]
 [1.82826443e-06 9.94272888e-01 1.03191494e-04 ... 4.43007025e-07
  1.36166245e-10 2.36499673e-07]]
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(2246, 46), dtype=float32)
In [13]:
# Convert predicted probabilities and one-hot labels back to class indices
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
y_test = tf.argmax(y_test, axis=1)
print(y_test)
tf.Tensor([ 3 10  1 ...  3  3  1], shape=(2246,), dtype=int64)
tf.Tensor([ 3 10  1 ...  3  3 24], shape=(2246,), dtype=int64)
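With both tensors reduced to class indices, test accuracy follows directly; a quick check (it should roughly match the final val_acc of about 0.79):

acc = tf.reduce_mean(tf.cast(tf.equal(predict_y, y_test), tf.float32))
print(acc.numpy())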
posted @ 2020-10-07 01:31 范仁义