Multiple linear regression
When the situation involves multiple features, we use multiple linear regression.
Model
Previously (with one feature): $f_{w,b}(x)=wx+b$
When there are $n$ features, we get
$f_{\vec{w},b}(\vec{x})=w_1x_1+w_2x_2+...+w_nx_n+b $
The parameters of the model are:
$\vec{w}=[w_1\ w_2\ w_3\ ...w_n] $
b is a number
$\vec{x}=[x_1\ x_2\ x_3\ ...\ x_n]$
Then we get:
$f_{\vec{w},b}(\vec{x})=\vec{w}\cdot\vec{x}+b=w_1x_1+w_2x_2+w_3x_3+...+w_nx_n+b$
This model is called multiple linear regression.
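As a quick illustration, here is a minimal sketch of this model in NumPy; the values of w, b, and x are made up for the example.

```python
import numpy as np

# Made-up example parameters and features for illustration.
w = np.array([1.0, 2.5, -3.3])    # one weight per feature, n = 3
b = 4.0                           # the bias b is a single number
x = np.array([10.0, 20.0, 30.0])  # one example with 3 features

# f_wb(x) = w · x + b
f_wb = np.dot(w, x) + b
print(f_wb)  # 1.0*10 + 2.5*20 + (-3.3)*30 + 4.0 = -35.0
```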
Vectorization
Python's NumPy library has a method, np.dot(), that can be used to do this vector calculation.
np.dot() calculates across all the features in parallel at the same time, which makes it much faster than a sequential loop when $n$ is large.
For example, compare a plain Python loop with the vectorized np.dot() call:
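This is a minimal sketch; the function names (predict_loop, predict_dot) and the array size are made up for illustration.

```python
import numpy as np

# Loop version: one multiply-add per feature, executed one at a time.
def predict_loop(w, x, b):
    f = 0.0
    for j in range(w.shape[0]):
        f = f + w[j] * x[j]
    return f + b

# Vectorized version: np.dot does all the multiply-adds in a single call,
# which NumPy can run with optimized parallel hardware instructions.
def predict_dot(w, x, b):
    return np.dot(w, x) + b

rng = np.random.default_rng(1)
w = rng.random(10_000)
x = rng.random(10_000)
b = 1.5
print(np.isclose(predict_loop(w, x, b), predict_dot(w, x, b)))  # True
```

Both functions return the same prediction, but for large $n$ the vectorized version is dramatically faster.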
Gradient descent for multiple linear regression
Previous notation: parameters $w_1,w_2,w_3,...,w_n$ and $b$.
Vector notation: parameters $\vec{w}=[w_1\ w_2\ ...\ w_n]$ and $b$; cost function $J(\vec{w},b)$.
**Gradient descent iterates over all of these parameters together, so we can find the**
$\mathop{minimum}\limits_{w_1,w_2,...,w_n,b}J(w_1,w_2,w_3,...,w_n,b)$
The gradient descent update rules for one feature versus $n$ features (repeat until convergence):

One feature:
$w=w-\alpha\frac{1}{m}\sum\limits_{i=1}^{m}(f_{w,b}(x^{(i)})-y^{(i)})x^{(i)}$
$b=b-\alpha\frac{1}{m}\sum\limits_{i=1}^{m}(f_{w,b}(x^{(i)})-y^{(i)})$

$n$ features (for $j=1,...,n$):
$w_j=w_j-\alpha\frac{1}{m}\sum\limits_{i=1}^{m}(f_{\vec{w},b}(\vec{x}^{(i)})-y^{(i)})x_j^{(i)}$
$b=b-\alpha\frac{1}{m}\sum\limits_{i=1}^{m}(f_{\vec{w},b}(\vec{x}^{(i)})-y^{(i)})$
As before, we simultaneously update all of these values on each iteration.
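Putting the update rules together, here is a minimal sketch of batch gradient descent for multiple linear regression; alpha and num_iters are illustrative defaults, not tuned values.

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """X has shape (m, n): m examples, n features. Returns fitted (w, b)."""
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(num_iters):
        err = X @ w + b - y      # f_wb(x^(i)) - y^(i) for all m examples at once
        dj_dw = (X.T @ err) / m  # partial derivatives with respect to each w_j
        dj_db = err.sum() / m    # partial derivative with respect to b
        w = w - alpha * dj_dw    # simultaneous update of all w_j ...
        b = b - alpha * dj_db    # ... and of b
    return w, b
```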
An alternative to gradient descent
Normal equation
- Only for linear regression
- Solves for $w$, $b$ without iterations (see the sketch below)
Disadvantages
- Doesn't generalize to other learning algorithms
- Slow when the number of features is large (> 10,000)
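For contrast with gradient descent, here is a minimal sketch of the normal equation in NumPy, assuming the training examples are stacked into a matrix X of shape (m, n); appending a column of ones lets $b$ be solved for as one more weight.

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form solution for (w, b) with no iterations."""
    m = X.shape[0]
    Xb = np.hstack([X, np.ones((m, 1))])          # add a constant-1 column for b
    theta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)  # solve (Xb^T Xb) theta = Xb^T y
    return theta[:-1], theta[-1]                  # split back into (w, b)
```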
**What you need to know**
- The normal equation method may be used internally by machine learning libraries that implement linear regression.
- Gradient descent is the recommended method for finding the parameters $w$, $b$.
