Nowcoder Solution | Single Neuron with Backpropagation
Problem
A single neuron with backpropagation (Single Neuron with Backpropagation) is the most common basic building block of a neural network.
The algorithm proceeds as follows:
- Initialize the weight vector and the scalar bias\[w = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} \]\[b \in \mathbb{R} \]
- Forward pass: compute the predictions\[z = w \cdot x + b \]\[\text{predictions} = \mathrm{sigmoid}(z) \]
- Backward pass: compute the gradients. This problem uses mean squared error (MSE) as the loss function,\[\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (\text{predictions}_i - \text{labels}_i)^2 \]Defining\[\text{errors} = \text{predictions} - \text{labels} \]the gradients are\[\nabla_w = \frac{2}{n} \, X^T \left[ \text{errors} \odot \text{predictions} \odot (1 - \text{predictions}) \right] \]\[\nabla_b = \frac{2}{n} \sum_{i=1}^{n} \text{errors}_i \cdot \text{predictions}_i \cdot (1 - \text{predictions}_i) \]where \(X\) is the feature matrix, \(n\) is the number of samples, and \(\odot\) denotes element-wise multiplication (see the derivation sketch after this list)
- Update the weights and bias\[w \leftarrow w - \eta \cdot \nabla_w \]\[b \leftarrow b - \eta \cdot \nabla_b \]where \(\eta\) is the learning rate
- Repeat steps 2-4 until the maximum number of epochs is reached
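The factor \(\text{predictions} \odot (1 - \text{predictions})\) in both gradients comes from the derivative of the sigmoid, \(\sigma'(z) = \sigma(z)\,(1 - \sigma(z))\). A brief derivation sketch for the weight gradient, writing \(\hat{y}\) for the predictions and \(y\) for the labels (notation introduced here for brevity):
\[\frac{\partial \, \mathrm{MSE}}{\partial w} = \frac{2}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i) \, \frac{\partial \hat{y}_i}{\partial w} = \frac{2}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i) \, \hat{y}_i (1 - \hat{y}_i) \, x_i = \frac{2}{n} \, X^T \left[ (\hat{y} - y) \odot \hat{y} \odot (1 - \hat{y}) \right] \]
The bias gradient follows the same chain rule with \(\partial z_i / \partial b = 1\), which replaces the matrix product with a plain sum.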
The reference code is as follows:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def train_neuron(features, labels, initial_weights, initial_bias, learning_rate, epochs):
    # Cast inputs to float arrays so the in-place updates below work
    # even when the arguments are Python lists of ints
    weights = np.array(initial_weights, dtype=float)
    bias = float(initial_bias)
    features = np.array(features, dtype=float)
    labels = np.array(labels, dtype=float)
    mse_values = []
    for _ in range(epochs):
        # Forward pass
        z = np.dot(features, weights) + bias
        predictions = sigmoid(z)
        mse = np.mean((predictions - labels) ** 2)
        mse_values.append(round(mse, 4))
        # Gradient calculation for weights and bias
        errors = predictions - labels
        weight_gradients = (2 / len(labels)) * np.dot(features.T, errors * predictions * (1 - predictions))
        bias_gradient = (2 / len(labels)) * np.sum(errors * predictions * (1 - predictions))
        # Update weights and bias
        weights -= learning_rate * weight_gradients
        bias -= learning_rate * bias_gradient
    # Round weights and bias for output
    updated_weights = np.round(weights, 4)
    updated_bias = round(bias, 4)
    return updated_weights.tolist(), updated_bias, mse_values
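A minimal usage sketch; the feature matrix, labels, and hyperparameters below are made-up example values, not taken from the problem statement:

```python
# Hypothetical toy data: three samples, two features each.
features = [[0.5, 1.0], [1.5, 2.0], [2.0, 0.5]]
labels = [0, 1, 0]

weights, bias, mse_values = train_neuron(
    features, labels,
    initial_weights=[0.1, -0.2],  # example starting point
    initial_bias=0.0,
    learning_rate=0.1,
    epochs=3,
)
print(weights, bias)  # parameters after training, rounded to 4 decimals
print(mse_values)     # one MSE value per epoch
```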
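One way to sanity-check the analytic gradients is a central finite-difference comparison. This check is not part of the original solution; `numeric_weight_gradient` is a hypothetical helper for verification only:

```python
def numeric_weight_gradient(features, labels, weights, bias, i, eps=1e-6):
    """Central-difference estimate of d(MSE)/d(weights[i])."""
    X = np.array(features, dtype=float)
    y = np.array(labels, dtype=float)

    def loss(w):
        # MSE of the sigmoid neuron at weight vector w
        p = sigmoid(np.dot(X, w) + bias)
        return np.mean((p - y) ** 2)

    w_plus = np.array(weights, dtype=float)
    w_minus = np.array(weights, dtype=float)
    w_plus[i] += eps
    w_minus[i] -= eps
    return (loss(w_plus) - loss(w_minus)) / (2 * eps)
```

For each index `i`, the returned estimate should agree with the corresponding entry of `weight_gradients` computed inside `train_neuron` to several decimal places.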
