
"Deep Learning with Python" Notes --- 3.6 Predicting House Prices: A Regression Problem

I. Summary

One-sentence summary:

(404, 13) means a 13-dimensional model input: train_data has shape (404, 13), i.e. each sample has 13 features, so the model's input dimension is 13.
One output neuron can fit any value: because this is a regression problem, the output layer is a Dense layer with a single unit and no activation.
[Subtract the mean, divide by the standard deviation]: because the features are on very different scales, normalize each feature by subtracting its mean and dividing by its standard deviation.

 

 

1. Isn't logistic regression a regression algorithm?

Logistic regression is a classification algorithm: do not confuse regression problems with the logistic regression algorithm. Confusingly, logistic regression is not a regression algorithm but a classification algorithm.
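A minimal Keras sketch of the distinction (my own illustration, not from the book): the two setups differ only in the output activation and the loss.

import tensorflow as tf

# logistic "regression" = binary classification: sigmoid output, crossentropy loss
clf = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(13,))])
clf.compile(optimizer='rmsprop', loss='binary_crossentropy')

# scalar regression: linear output, MSE loss
reg = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, input_shape=(13,))])
reg.compile(optimizer='rmsprop', loss='mse')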

 

 

2. Why are the mean and standard deviation used to normalize the test data computed on the training data?

Never use any quantity computed on the test data: in your workflow, you must not use any result computed on the test data, not even for something as simple as data normalization.
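A hedged sketch of the same rule with scikit-learn (assuming scikit-learn is available; the notebook below does it by hand with NumPy): the scaler is fitted on the training data only and then applied to both sets.

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
train_data = scaler.fit_transform(train_data)  # statistics computed on the training data only
test_data = scaler.transform(test_data)        # the same training mean/std reused on the test data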

 

 

3. Regression: the last layer is purely linear, so the network can learn to predict values in any range?

No activation on the last layer of a regression network: the last layer has a single unit and no activation, i.e. it is a linear layer. This is a typical setup for scalar regression (regression that predicts a single continuous value).
An activation function would constrain the output range: for example, adding a sigmoid activation to the last layer would mean the network could only learn to predict values between 0 and 1.
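A tiny numeric check of that claim (my addition): a sigmoid squashes every input into (0, 1), so it could never emit targets like the house prices here, which range from about 5 to 50 (in thousands of dollars).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # [4.54e-05 0.5 0.99995] -- always inside (0, 1)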

 

 

4. Mean squared error (MSE) and mean absolute error (MAE)?

Absolute value of the difference between predictions and targets: mean absolute error (MAE)
Square of the difference between predictions and targets: mean squared error (MSE)
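A small NumPy sketch of both metrics (my own toy numbers):

import numpy as np

y_true = np.array([3.0, -0.5, 2.0])
y_pred = np.array([2.5, 0.0, 2.0])

print(np.mean(np.abs(y_true - y_pred)))  # MAE: mean of |error|  -> 0.333...
print(np.mean((y_true - y_pred) ** 2))   # MSE: mean of error^2  -> 0.1666...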

 

 

5. Why use K-fold cross-validation?

With a small dataset, validation scores fluctuate wildly: because there are so few data points, the validation set would be tiny (about 100 samples), so the validation score could vary a lot depending on which samples end up in the validation set and which in the training set.
In other words, the way the validation split is chosen can cause high variance in the validation score, which makes it impossible to evaluate the model reliably; see the K-fold code in item 7 below.

 

 

6. Data normalization (train_data has shape (404, 13))?

Subtract the mean, divide by the standard deviation: train_data -= mean; train_data /= std
Feature-wise (column-wise) statistics: mean = train_data.mean(axis=0)
mean = train_data.mean(axis=0)  # per-feature mean over the 404 training samples
train_data -= mean
std = train_data.std(axis=0)    # per-feature standard deviation
train_data /= std

test_data -= mean               # note: the test data is normalized with the *training* statistics
test_data /= std
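A quick sanity check (my addition): after this, every training-set column should have mean ~0 and std ~1; the test set will only be approximately centered, since it was normalized with the training statistics.

print(train_data.mean(axis=0).round(6))  # ~0 for all 13 features
print(train_data.std(axis=0).round(6))   # ~1 for all 13 features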

 

7. K-fold cross-validation?

Train and validate within the training set: part of the training data is used for training and another part for validation, and only afterwards do you evaluate on the test set.
The training and validation partitions rotate: for example, split the training data into 5 partitions; then partitions 1/2/3/4/5 each take a turn as the validation data.
In code this is just selecting the two partitions: the code is very simple, select the validation data, select the training data.
import numpy as np

k = 4
num_val_samples = len(train_data) // k
num_epochs = 100
all_scores = []

for i in range(k):
    print('processing fold #', i)
    # partition i becomes the validation data for this fold
    val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]

    # the remaining partitions become the training data
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples],
         train_data[(i + 1) * num_val_samples:]],
        axis=0)
    partial_train_targets = np.concatenate(
        [train_targets[:i * num_val_samples],
         train_targets[(i + 1) * num_val_samples:]],
        axis=0)

    # build_model() is the book's helper that returns a freshly compiled model
    # (the same 13->64->64->1 architecture built in part II below)
    model = build_model()
    model.fit(partial_train_data, partial_train_targets,
              epochs=num_epochs, batch_size=1, verbose=0)
    val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
    all_scores.append(val_mae)
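As in the book, the per-fold MAE values collected in all_scores are then averaged into a single validation score:

print(all_scores)           # one validation MAE per fold
print(np.mean(all_scores))  # the average across the k folds is the model's validation score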

 

 

 

II. Predicting House Prices: A Regression Problem


 

import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

Steps

1. Load the dataset
2. Format the dataset (to make it easy to use)
3. Build the model
4. Train the model
5. Evaluate the model

Tasks

1. Load the dataset

In [2]:
(train_data, train_targets), (test_data, test_targets) = tf.keras.datasets.boston_housing.load_data()
In [3]:
print(train_data.shape)
print(train_targets.shape)
print(test_data.shape)
print(test_targets.shape)
(404, 13)
(404,)
(102, 13)
(102,)
In [4]:
print(train_data[0:2])
print(train_targets[0:2])
[[1.23247e+00 0.00000e+00 8.14000e+00 0.00000e+00 5.38000e-01 6.14200e+00
  9.17000e+01 3.97690e+00 4.00000e+00 3.07000e+02 2.10000e+01 3.96900e+02
  1.87200e+01]
 [2.17700e-02 8.25000e+01 2.03000e+00 0.00000e+00 4.15000e-01 7.61000e+00
  1.57000e+01 6.27000e+00 2.00000e+00 3.48000e+02 1.47000e+01 3.95380e+02
  3.11000e+00]]
[15.2 42.3]

2. Format the dataset

Data normalization

In [5]:
mean = train_data.mean(axis=0) 
train_data -= mean 
std = train_data.std(axis=0) 
train_data /= std 
 
test_data -= mean 
test_data /= std
In [6]:
print(mean)
[3.74511057e+00 1.14801980e+01 1.11044307e+01 6.18811881e-02
 5.57355941e-01 6.26708168e+00 6.90106436e+01 3.74027079e+00
 9.44059406e+00 4.05898515e+02 1.84759901e+01 3.54783168e+02
 1.27408168e+01]
In [7]:
print(std)
[9.22929073e+00 2.37382770e+01 6.80287253e+00 2.40939633e-01
 1.17147847e-01 7.08908627e-01 2.79060634e+01 2.02770050e+00
 8.68758849e+00 1.66168506e+02 2.19765689e+00 9.39946015e+01
 7.24556085e+00]

3. Build the model

13->64->64->1

because the input data is 13-dimensional

In [19]:
print(train_data.shape[1])
13
In [9]:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],)))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 64)                896       
_________________________________________________________________
dense_1 (Dense)              (None, 64)                4160      
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65        
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
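Where those parameter counts come from: each Dense layer has (inputs x units) weights plus one bias per unit.

print(13 * 64 + 64)  # dense:   896
print(64 * 64 + 64)  # dense_1: 4160
print(64 * 1 + 1)    # dense_2: 65, total 5121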

4. Train the model

In [13]:
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
history = model.fit(train_data, train_targets, batch_size=32, epochs=500, validation_data=(test_data, test_targets))
Epoch 1/500
13/13 [==============================] - 0s 11ms/step - loss: 1.4033 - mae: 0.8724 - val_loss: 11.2357 - val_mae: 2.4261
Epoch 2/500
13/13 [==============================] - 0s 5ms/step - loss: 0.9427 - mae: 0.6851 - val_loss: 10.9043 - val_mae: 2.3768
Epoch 3/500
13/13 [==============================] - 0s 4ms/step - loss: 1.0428 - mae: 0.7280 - val_loss: 10.4532 - val_mae: 2.2693
Epoch 4/500
13/13 [==============================] - 0s 5ms/step - loss: 0.9874 - mae: 0.7185 - val_loss: 10.4946 - val_mae: 2.3159
Epoch 5/500
13/13 [==============================] - 0s 4ms/step - loss: 0.9864 - mae: 0.7013 - val_loss: 10.9446 - val_mae: 2.3195
Epoch 6/500
13/13 [==============================] - 0s 4ms/step - loss: 1.0437 - mae: 0.7546 - val_loss: 10.7265 - val_mae: 2.3661
Epoch 7/500
13/13 [==============================] - 0s 4ms/step - loss: 1.0438 - mae: 0.7458 - val_loss: 11.0252 - val_mae: 2.3670
......
Epoch 497/500
13/13 [==============================] - 0s 5ms/step - loss: 0.4600 - mae: 0.4860 - val_loss: 11.6482 - val_mae: 2.5032
Epoch 498/500
13/13 [==============================] - 0s 5ms/step - loss: 0.4295 - mae: 0.4783 - val_loss: 12.2151 - val_mae: 2.5380
Epoch 499/500
13/13 [==============================] - 0s 4ms/step - loss: 0.5045 - mae: 0.4952 - val_loss: 11.5870 - val_mae: 2.4328
Epoch 500/500
13/13 [==============================] - 0s 4ms/step - loss: 0.4438 - mae: 0.4624 - val_loss: 11.3012 - val_mae: 2.4285
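The "13/13" in every progress line is the number of batches per epoch: 404 training samples at batch_size=32 give ceil(404/32) = 13 batches.

import math
print(math.ceil(404 / 32))  # 13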
In [14]:
plt.plot(history.epoch,history.history.get('loss'),'b--',label='train_loss')
plt.plot(history.epoch,history.history.get('val_loss'),'r-',label='test_loss')
plt.title("loss")
plt.legend()
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('mae'),'b--',label='train_mae')
plt.plot(history.epoch,history.history.get('val_mae'),'r-',label='test_mae')
plt.title("mae")
plt.legend()
plt.show()
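The validation curves are noisy; a hedged sketch of the exponential-moving-average smoothing the book applies to this section's MAE plots (factor=0.9 assumed) makes the trend easier to read:

def smooth_curve(points, factor=0.9):
    # exponential moving average: blend each point with the running smoothed value
    smoothed = []
    for p in points:
        if smoothed:
            smoothed.append(smoothed[-1] * factor + p * (1 - factor))
        else:
            smoothed.append(p)
    return smoothed

plt.plot(history.epoch, smooth_curve(history.history['val_mae']), 'r-', label='smoothed test_mae')
plt.title("smoothed mae")
plt.legend()
plt.show()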

5. Evaluate the model

In [18]:
pred_y = model.predict(test_data)
# print(pred_y)
# print(test_targets)
In [17]:
test_y = test_targets
pred_y = pred_y.flatten()  # predictions come back with shape (102, 1); flatten to (102,)
test_y = np.array(test_y)
print(test_y)
print(pred_y)
print(test_y - pred_y)     # per-sample residuals
[ 7.2 18.8 19.  27.  22.2 24.5 31.2 22.9 20.5 23.2 18.6 14.5 17.8 50.
 20.8 24.3 24.2 19.8 19.1 22.7 12.  10.2 20.  18.5 20.9 23.  27.5 30.1
  9.5 22.  21.2 14.1 33.1 23.4 20.1  7.4 15.4 23.8 20.1 24.5 33.  28.4
 14.1 46.7 32.5 29.6 28.4 19.8 20.2 25.  35.4 20.3  9.7 14.5 34.9 26.6
  7.2 50.  32.4 21.6 29.8 13.1 27.5 21.2 23.1 21.9 13.  23.2  8.1  5.6
 21.7 29.6 19.6  7.  26.4 18.9 20.9 28.1 35.4 10.2 24.3 43.1 17.6 15.4
 16.2 27.1 21.4 21.5 22.4 25.  16.6 18.6 22.  42.8 35.1 21.5 36.  21.9
 24.1 50.  26.7 25. ]
[ 8.179475 19.18562  20.56756  28.007307 23.014872 23.301495 28.943293
 22.105633 19.93745  20.389534 28.86558  17.563843 15.652682 45.801353
 18.408344 22.459646 24.44728  19.871622 19.041067 22.00699   9.514465
 11.386035 20.403801 13.721691 16.42657  21.749489 28.763397 31.66381
 11.446484 20.60264  19.014132 17.144209 31.363287 23.790178 25.020218
  7.566932 18.961458 18.088972 18.55346  26.601353 29.015308 26.616762
 12.410012 46.794353 31.93799  32.909035 28.956573 20.674675 19.139225
 23.500257 36.84088  24.904589 12.613471 13.778407 37.209167 28.03156
 10.650642 49.195538 32.61445  24.718657 20.621464 12.212226 18.410444
 21.782688 24.10987  20.056698 13.899408 22.1465   12.356523  7.560722
 19.94562  29.652323 26.594627 11.727043 23.828306 21.695612 19.536566
 24.405844 39.87596  10.867122 23.834948 39.214317 18.14407  13.481483
 18.598074 16.166594 22.679417 21.89931  26.408024 23.293655 19.30358
 16.85877  25.455296 42.920017 39.298115 19.608139 37.82789  33.60365
 23.609737 44.211224 28.335894 20.136402]
[ -0.97947483  -0.38561935  -1.5675602   -1.00730705  -0.8148716
   1.1985054    2.25670738   0.79436722   0.56254959   2.810466
 -10.26557961  -3.06384277   2.1473177    4.19864655   2.39165573
   1.84035378  -0.24728088  -0.07162209   0.05893288   0.69300957
   2.48553467  -1.18603497  -0.40380096   4.77830887   4.47342911
   1.25051117  -1.26339722  -1.56381073  -1.94648361   1.39735985
   2.18586845  -3.04420891   1.73671303  -0.3901783   -4.9202179
  -0.1669322   -3.56145821   5.71102791   1.54654083  -2.10135269
   3.98469162   1.78323784   1.68998775  -0.09435349   0.56200981
  -3.30903473  -0.55657349  -0.87467499   1.06077499   1.49974251
  -1.44088135  -4.6045887   -2.91347103   0.7215929   -2.30916748
  -1.4315609   -3.4506424    0.80446243  -0.21444855  -3.11865654
   9.17853622   0.88777409   9.08955574  -0.58268814  -1.00987091
   1.84330215  -0.89940834   1.05350037  -4.25652256  -1.96072187
   1.75438042  -0.05232277  -6.99462738  -4.72704315   2.5716938
  -2.79561195   1.36343422   3.69415627  -4.4759613   -0.6671217
   0.46505241   3.88568268  -0.54406967   1.91851654  -2.39807396
  10.93340645  -1.27941666  -0.39930916  -4.00802383   1.7063446
  -2.70357933   1.74122963  -3.45529556  -0.12001724  -4.19811478
   1.89186096  -1.82788849 -11.70364914   0.4902626    5.7887764
  -1.63589363   4.86359787]
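To collapse that residual vector into a single score (my addition), compute the test-set MAE directly; it should match the final val_mae reported during training (~2.4):

print(np.mean(np.abs(test_y - pred_y)))  # mean absolute error on the test set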