LeNet: a detailed walkthrough of the network-structure prototxt files

LeNet is more or less the first example you meet when learning Caffe; it comes from the official Caffe site: http://caffe.berkeleyvision.org/gathered/examples/mnist.html

The interface part is already written in Python, so if you just want to run the example you do not need to look at the C++ code yet.

1. Following the paths given in the tutorial, we first look at the top-level training script:

cd $CAFFE_ROOT
./examples/mnist/train_lenet.sh

2. Opening it, we see that it contains just two lines:

#!/usr/bin/env sh
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt    (solver configuration file)

I will call lenet_solver.prototxt the solver configuration file; the solver prototxt is the key piece. (Note: the MNIST LMDBs must already exist at this point; the linked tutorial creates them beforehand with ./data/mnist/get_mnist.sh and ./examples/mnist/create_mnist.sh.)
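For reference, the same training run can also be launched through the Python interface mentioned at the top. A rough pycaffe sketch, with paths exactly as in the shell script:

import caffe

caffe.set_mode_gpu()                                    # or caffe.set_mode_cpu() on a CPU-only build
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')
solver.solve()                                          # trains until max_iter from the solver file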

3. Next, open the solver configuration file:

 

# The train/test net protocol buffer definition	(specifies the model used for training and testing)
net: "examples/mnist/lenet_train_test.prototxt"	(location of the network definition file)

# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100			(each test iteration forwards one batch of 100 test-set samples)

# Carry out testing every 500 training iterations.
test_interval: 500		(run a test every 500 training iterations)

# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01			(base learning rate)
momentum: 0.9		(momentum)
weight_decay: 0.0005		(weight decay)

# The learning rate policy	(learning rate schedule)
lr_policy: "inv"		(inv: return base_lr * (1 + gamma * iter) ^ (-power); see the small sketch after this listing)
gamma: 0.0001
power: 0.75

# Display every 100 iterations
display: 100		(print training status every 100 iterations)

# The maximum number of iterations
max_iter: 10000		(maximum number of iterations)

# snapshot intermediate results
snapshot: 5000		(save an intermediate model every 5000 iterations, e.g. lenet_iter_5000.caffemodel)
snapshot_prefix: "examples/mnist/lenet"

# solver mode: CPU or GPU
solver_mode: GPU		(train on GPU; set to CPU for a CPU-only build)
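To make the "inv" learning-rate policy above concrete, here is a tiny stand-alone Python check of how the rate decays over the 10000 iterations (my own illustration, not part of Caffe):

base_lr, gamma, power = 0.01, 0.0001, 0.75
for it in (0, 500, 1000, 5000, 10000):
    lr = base_lr * (1 + gamma * it) ** (-power)   # the "inv" formula quoted above
    print(it, round(lr, 6))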

 

This points us to lenet_train_test.prototxt (I will call it the network definition file; it holds the network structure).

4. Open the network definition file:

name: "LeNet"			网络名
layer {
  name: "mnist"			本层名称
  type: "Data"				层类型
  top: "data"				下一层接口
  top: "label"				下一层接口
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625			#1/256,预处理如减均值,尺寸变换,随机剪,镜像等
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"	训练数据位置
    batch_size: 64					一次训练的样本数
    backend: LMDB					读入的训练数据格式,默认leveldb
  }
}


layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100					one test batch uses 100 samples
    backend: LMDB
  }
}


layer {
  name: "conv1"
  type: "Convolution"				卷积层
  bottom: "data"				上一层名“data”
  top: "conv1"					下一层接口“conv1”
  param {
    lr_mult: 1					(weights的学习率与全局相同)
  }
  param {
    lr_mult: 2					(biases的学习率是全局的2倍)
  }
  convolution_param {
    num_output: 20				卷积核20个
    kernel_size: 5				卷积核尺寸5×5
    stride: 1					步长1
    weight_filler {
      type: "xavier"				(随机的初始化权重和偏差)
    }
    bias_filler {
      type: "constant"				bias用0初始化
    }
  }
}


layer {
  name: "pool1"
  type: "Pooling"				池化层
  bottom: "conv1"				上层“conv1”
  top: "pool1"					下层接口“pool1”
  pooling_param {
    pool: MAX					池化函数用MAX
    kernel_size: 2				池化核函数大小2×2
    stride: 2					步长2
  }
}


layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50				50 convolution kernels
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}


layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}


layer {
  name: "ip1"
  type: "InnerProduct"				全连接层
  bottom: "pool2"				上层连接“pool2”
  top: "ip1"					“下层输出接口ip1”
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500				输出数量500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}


layer {
  name: "relu1"
  type: "ReLU"				激活函数
  bottom: "ip1"
  top: "ip1"	(这个地方还是ip1,底层与顶层相同减少开支,下一层全连接层的输入也还是ip1)
}


layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10				10 outputs (one per digit class)
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}


layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"			上层连接ip2全连接层
  bottom: "label"			上层连接label层
  top: "accuracy"			输出接口为accuracy
  include {
    phase: TEST			
  }
}


layer {
  name: "loss"
  type: "SoftmaxWithLoss"		损失函数
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
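Because the two mnist data layers differ only in their include { phase } blocks and batch sizes, this single file describes both a training net and a test net. A rough pycaffe sketch to see that (it assumes the MNIST LMDBs from the tutorial already exist, since the Data layers open them when the net is built):

import caffe

train_net = caffe.Net('examples/mnist/lenet_train_test.prototxt', caffe.TRAIN)
test_net  = caffe.Net('examples/mnist/lenet_train_test.prototxt', caffe.TEST)
print(train_net.blobs['data'].data.shape)   # (64, 1, 28, 28)  -> batch_size 64 from the TRAIN data layer
print(test_net.blobs['data'].data.shape)    # (100, 1, 28, 28) -> batch_size 100 from the TEST data layer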

To be honest I took a shortcut above: the network file I originally studied in detail was not this one but another (the deploy version). Below is that file with my more detailed annotations.

name: "LeNet"(网络的名字)
layer {
  name: "data"
  type: "Input"(层类型,输入)
  top: "data"(导入数据这一层没有bottom,因为是第一层)
  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }(64张图为一批,28*28大小)
}
Dimensions of this batch after reading: 64 1 28 28


layer {
  name: "conv1"
  type: "Convolution"(卷积类型层)
  bottom: "data"(上一层名叫做data)
  top: "conv1"(下一层名叫做conv1)
  param {
    lr_mult: 1(learning rate of the weights equals the global rate)
  }
  param {
    lr_mult: 2(learning rate of the biases is 2x the global rate)
  }
  convolution_param {(convolution parameter settings)
    num_output: 20(20 outputs, i.e. 20 feature maps)
    kernel_size: 5(kernel size 5*5)
    stride: 1(convolution stride)
    weight_filler {
      type: "xavier"(Xavier random initialization of the weights)
    }
    bias_filler {
      type: "constant"(biases initialized to 0)
    }
  }(after the convolution the data becomes (28-5+1)*(28-5+1), with 20 feature maps)
}
Dimensions of this batch after the convolution: 64 20 24 24


layer {
  name: "pool1"
  type: "Pooling"(下采样类型层)
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX(pooling method: take the maximum)
    kernel_size: 2(pooling window size)
    stride: 2(stride)
  }
}
Dimensions of this batch after pooling: 64 20 12 12


layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50(50 convolution kernels)
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
Dimensions of this batch after the convolution: 64 50 8 8


layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
Dimensions of this batch after pooling: 64 50 4 4
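The spatial sizes noted after each convolution and pooling layer follow the usual no-padding formula out = (in - kernel) / stride + 1; here is a tiny Python helper of my own, just to check the numbers above:

def out_size(in_size, kernel, stride=1, pad=0):
    # output size of a convolution/pooling window (no padding here, division is exact)
    return (in_size + 2 * pad - kernel) // stride + 1

print(out_size(28, 5))       # conv1: 24
print(out_size(24, 2, 2))    # pool1: 12
print(out_size(12, 5))       # conv2: 8
print(out_size(8, 2, 2))     # pool2: 4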


layer {
  name: "ip1"
  type: "InnerProduct"(全连接类型层)
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {(fully connected layer parameter settings)
    num_output: 500(500 outputs)
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }(each 4*4 map is fully connected, equivalent to a 4*4 kernel producing a 1*1 output)
}
Dimensions of this batch after the fully connected layer: 64 500 1 1


layer {
  name: "relu1"
  type: "ReLU"(激活函数类型层)
  bottom: "ip1"
  top: "ip1"(这个地方还是ip1,底层与顶层相同减少开支,下一层全连接层的输入也还是ip1)
}
Dimensions of this batch after the ReLU layer: 64 500 1 1 (unchanged)

layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10(the final scores: digits 0-9, so the output dimension is 10)
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }(the classification scores are produced by this layer)
}
Dimensions of this batch after the fully connected layer: 64 10 1 1


layer {
  name: "prob"
  type: "Softmax"(损失函数)
  bottom: "ip2"
  top: "prob"(一开始数据输入为date的话,这里写label)
}
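The per-layer dimensions quoted above can be checked by loading this deploy file with pycaffe and printing every blob shape (a sketch assuming the file is examples/mnist/lenet.prototxt as in the Caffe repository; note that depending on the Caffe version the fully connected blobs may print as 2-D, e.g. (64, 500) instead of 64 500 1 1):

import caffe

net = caffe.Net('examples/mnist/lenet.prototxt', caffe.TEST)
for name, blob in net.blobs.items():
    print(name, blob.data.shape)   # data, conv1, pool1, conv2, pool2, ip1, ip2, prob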

Note that the activation (ReLU) layer uses the same blob for its input and output; this is done to save memory.

 

 

That is all for now. My focus is on images, so I am currently looking at how to convert JPG files into LMDB and then write my own network to reproduce the code from a paper. If any of the information above is incorrect, please point it out, thanks.

 
