CNN Compression: Adding a Mask to Backpropagation (Caffe Code Modifications)

Neural network compression has been a hot research topic for the past three years. I found two related blog posts whose authors generously shared their source code, but I ran into some friction when training a masked network on the GPU, so this post documents the necessary changes in detail.

This post builds on 基于Caffe的CNN剪枝 [1] and Deep Compression阅读理解及Caffe源码修改 [2].

How is the mask structured?

In [1], the mask is stored in the Blob. A Blob is a block of data; at initialization a corresponding block must also be allocated on the GPU, hence the Addmask() function. Addmask() is a member method of Blob declared in blob.hpp and implemented in blob.cpp. To use it, Addmask() is called in inner_product_layer.cpp and base_conv_layer.cpp, so that during LayerSetUp the fc and conv layers allocate an extra SyncedMemory to hold the mask. Blob comes with a family of accessors such as cpu_data()/mutable_cpu_data() that have to be provided for the mask as well, and the appropriate accessor must be used when the mask values are changed during initialization.

inner_product_layer.cpp:

template <typename Dtype>
void InnerProductLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    ...
    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));
    this->blobs_[0]->Addmask();  // allocate the mask alongside the weights
    ...
}

base_conv_layer.cpp:

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    ...
    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));
    this->blobs_[0]->Addmask();
    ...
}

Modify blob.hpp and blob.cpp, adding the member mask_ and the related methods; the author of [1] has posted the source code in the comments of that article.
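For orientation, the declarations added to the Blob class might look roughly like the sketch below. The names match the accessors used later in this post (Addmask(), cpu_mask(), gpu_mask(), mutable_cpu_mask(), mutable_gpu_mask()); the authoritative version is the one posted in the comments of [1].

// Sketch of the additions to class Blob in include/caffe/blob.hpp.
// mask_ mirrors data_/diff_: one Dtype entry per weight.
 public:
  void Addmask();                  // allocate mask_; called from LayerSetUp
  const Dtype* cpu_mask() const;   // read-only CPU view of the mask
  const Dtype* gpu_mask() const;   // read-only GPU view of the mask
  Dtype* mutable_cpu_mask();       // writable CPU view (used in FromProto)
  Dtype* mutable_gpu_mask();       // writable GPU view
 protected:
  shared_ptr<SyncedMemory> mask_;  // same size and layout as data_ and diff_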

[2] instead defines the mask with a layer structure; a layer corresponds to a sequence of operations on data, or in other words a way of combining blobs.

However, to perform these operations on the GPU, the mask data itself needs GPU-side storage and operations. This post therefore follows the approach of [1]: mask_ is added to the Blob class and implemented as a Blob attribute.
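A minimal blob.cpp implementation of these methods could look like the sketch below: Addmask() allocates a SyncedMemory of the same size as the data and initializes every entry to 1 (keep everything), and the accessors mirror cpu_data()/gpu_data(). The details here are my assumption; the code in the comments of [1] is the reference.

template <typename Dtype>
void Blob<Dtype>::Addmask() {
  CHECK(count_);
  // One Dtype per weight, same size as data_ and diff_.
  mask_.reset(new SyncedMemory(count_ * sizeof(Dtype)));
  // Default: keep every connection until FromProto (or a filler) says otherwise.
  Dtype* mask = static_cast<Dtype*>(mask_->mutable_cpu_data());
  for (int i = 0; i < count_; ++i) {
    mask[i] = 1;
  }
}

template <typename Dtype>
const Dtype* Blob<Dtype>::cpu_mask() const {
  CHECK(mask_);
  return static_cast<const Dtype*>(mask_->cpu_data());
}

template <typename Dtype>
const Dtype* Blob<Dtype>::gpu_mask() const {
  CHECK(mask_);
  return static_cast<const Dtype*>(mask_->gpu_data());
}

template <typename Dtype>
Dtype* Blob<Dtype>::mutable_cpu_mask() {
  CHECK(mask_);
  return static_cast<Dtype*>(mask_->mutable_cpu_data());
}

template <typename Dtype>
Dtype* Blob<Dtype>::mutable_gpu_mask() {
  CHECK(mask_);
  return static_cast<Dtype*>(mask_->mutable_gpu_data());
}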

How is the mask initialized?

In Caffe, a network can be initialized in two ways: by calling a filler, which initializes the parameters according to the scheme defined in the model, or by reading the corresponding parameter matrices from an existing caffemodel or snapshot [1].

1. The filler path

At startup, the network is initialized by Init() in net.cpp, which walks from input to output and calls each layer's LayerSetUp to build the network structure. The code below is Caffe's xavier filler.

virtual void Fill(Blob<Dtype>* blob) {
    CHECK(blob->count());
    int fan_in = blob->count() / blob->num();
    int fan_out = blob->count() / blob->channels();
    Dtype n = fan_in;  // default to fan_in
    if (this->filler_param_.variance_norm() ==
        FillerParameter_VarianceNorm_AVERAGE) {
      n = (fan_in + fan_out) / Dtype(2);
    } else if (this->filler_param_.variance_norm() ==
        FillerParameter_VarianceNorm_FAN_OUT) {
      n = fan_out;
    }
    Dtype scale = sqrt(Dtype(3) / n);
    caffe_rng_uniform<Dtype>(blob->count(), -scale, scale,
        blob->mutable_cpu_data());
    //Filler<Dtype>:: FillMask(blob);
    CHECK_EQ(this->filler_param_.sparse(), -1)
         << "Sparsity not supported by this Filler.";
  }

The filler's job is to generate random initial values for the network structure that has just been built.

Even when the parameters are read from a snapshot or caffemodel, this random fill still runs first; the loaded values then overwrite it.
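The commented-out call Filler<Dtype>::FillMask(blob) in the Fill() code above hints at a second way to set the mask on this path. FillMask is not part of stock Caffe; a hypothetical version would simply mark every connection as active, so that a network initialized from scratch starts unpruned:

// Hypothetical helper on the Filler base class (include/caffe/filler.hpp),
// matching the commented-out call above: keep every connection of a freshly
// initialized network by setting all mask entries to 1.
template <typename Dtype>
void Filler<Dtype>::FillMask(Blob<Dtype>* blob) {
  CHECK(blob->count());
  Dtype* mask = blob->mutable_cpu_mask();
  for (int i = 0; i < blob->count(); ++i) {
    mask[i] = 1;
  }
}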

2. Reading parameters from a snapshot or caffemodel

In tools/caffe.cpp, the train phase can load parameters from a snapshot or caffemodel for finetuning; the test phase builds the network from the loaded parameters and runs inference.

In my case the network was sparsified in pycaffe, so the model being loaded has an unchanged number of connections but some connection weights set to zero. The mask_ has to be initialized while the parameters are read in, so FromProto in blob.cpp is modified:

template <typename Dtype>
void Blob<Dtype>::FromProto(const BlobProto& proto, bool reshape) {
  if (reshape) {
    vector<int> shape;
    if (proto.has_num() || proto.has_channels() ||
        proto.has_height() || proto.has_width()) {
      // Using deprecated 4D Blob dimensions --
      // shape is (num, channels, height, width).
      shape.resize(4);
      shape[0] = proto.num();
      shape[1] = proto.channels();
      shape[2] = proto.height();
      shape[3] = proto.width();
    } else {
      shape.resize(proto.shape().dim_size());
      for (int i = 0; i < proto.shape().dim_size(); ++i) {
        shape[i] = proto.shape().dim(i);
      }
    }
    Reshape(shape);
  } else {
    CHECK(ShapeEquals(proto)) << "shape mismatch (reshape not set)";
  }
  // copy data
  Dtype* data_vec = mutable_cpu_data();
  if (proto.double_data_size() > 0) {
    CHECK_EQ(count_, proto.double_data_size());
    for (int i = 0; i < count_; ++i) {
      data_vec[i] = proto.double_data(i);
    }
  } else {
    CHECK_EQ(count_, proto.data_size());
    for (int i = 0; i < count_; ++i) {
      data_vec[i] = proto.data(i);
    }
  }
  if (proto.double_diff_size() > 0) {
    CHECK_EQ(count_, proto.double_diff_size());
    Dtype* diff_vec = mutable_cpu_diff();
    for (int i = 0; i < count_; ++i) {
      diff_vec[i] = proto.double_diff(i);
    }
  } else if (proto.diff_size() > 0) {
    CHECK_EQ(count_, proto.diff_size());
    Dtype* diff_vec = mutable_cpu_diff();
    for (int i = 0; i < count_; ++i) {
      diff_vec[i] = proto.diff(i);
    }
  }
  // Initialize the mask from the loaded weights: 4D blobs are conv weights,
  // 2D blobs are fc weights; zero weights are treated as pruned connections.
  if (shape_.size() == 4 || shape_.size() == 2) {
    Dtype* mask_vec = mutable_cpu_mask();
    CHECK(count_);
    for (int i = 0; i < count_; ++i) {
      mask_vec[i] = data_vec[i] ? 1 : 0;
    }
  }
}

While the proto is read in, if the blob is 4D (a conv layer's weights) or 2D (an fc layer's weights), mask_ is initialized to data_vec[i] ? 1 : 0. For 1D blobs (biases; layers such as pool and relu have no weights at all), no mask is initialized.

How is backpropagation modified?

1. Modify the Blob update step, and include the math_functions.hpp header (for caffe_mul / caffe_gpu_mul).

template <typename Dtype>
void Blob<Dtype>::Update() {
  // We will perform update based on where the data is located.
  switch (data_->head()) {
  case SyncedMemory::HEAD_AT_CPU:
    // perform computation on CPU
    caffe_axpy<Dtype>(count_, Dtype(-1),
        static_cast<const Dtype*>(diff_->cpu_data()),
        static_cast<Dtype*>(data_->mutable_cpu_data()));
    // Re-apply the mask so pruned weights stay at zero. Only blobs that
    // received Addmask() (conv/fc weights) carry a mask; other parameter
    // blobs, e.g. biases, are updated as usual.
    if (mask_) {
      caffe_mul<Dtype>(count_,
          static_cast<const Dtype*>(mask_->cpu_data()),
          static_cast<const Dtype*>(data_->cpu_data()),
          static_cast<Dtype*>(data_->mutable_cpu_data()));
    }
    break;
  case SyncedMemory::HEAD_AT_GPU:
  case SyncedMemory::SYNCED:
#ifndef CPU_ONLY
    // perform computation on GPU
    caffe_gpu_axpy<Dtype>(count_, Dtype(-1),
        static_cast<const Dtype*>(diff_->gpu_data()),
        static_cast<Dtype*>(data_->mutable_gpu_data()));
    if (mask_) {
      caffe_gpu_mul<Dtype>(count_,
          static_cast<const Dtype*>(mask_->gpu_data()),
          static_cast<const Dtype*>(data_->gpu_data()),
          static_cast<Dtype*>(data_->mutable_gpu_data()));
    }
#else
    NO_GPU;
#endif
    break;
  default:
    LOG(FATAL) << "Syncedmem not initialized.";
  }
}

2. Add an operation of the form weight_diff[i] *= mask[i]; to both the CPU and the GPU backward computations.

inner_product_layer.cpp:

template <typename Dtype>
void InnerProductLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  if (this->param_propagate_down_[0]) {
    const Dtype* top_diff = top[0]->cpu_diff();
    const Dtype* bottom_data = bottom[0]->cpu_data();
    // Gradient with respect to weight
    Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();
    vector<int> weight_shape(2);
    if (transpose_) {
      weight_shape[0] = K_;
      weight_shape[1] = N_;
    } else {
      weight_shape[0] = N_;
      weight_shape[1] = K_;
    }
    // Zero the gradient entries of pruned connections.
    int count = weight_shape[0]*weight_shape[1];
    const Dtype* mask = this->blobs_[0]->cpu_mask();
    for (int j = 0; j < count; j++)
      weight_diff[j] *= mask[j];

    if (transpose_) {
      caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans,
          K_, N_, M_,
          (Dtype)1., bottom_data, top_diff,
          (Dtype)1., weight_diff);
    } else {
      caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans,
          N_, K_, M_,
          (Dtype)1., top_diff, bottom_data,
          (Dtype)1., weight_diff);
    }
  }
  if (bias_term_ && this->param_propagate_down_[1]) {
    const Dtype* top_diff = top[0]->cpu_diff();
    // Gradient with respect to bias
    caffe_cpu_gemv<Dtype>(CblasTrans, M_, N_, (Dtype)1., top_diff,
        bias_multiplier_.cpu_data(), (Dtype)1.,
        this->blobs_[1]->mutable_cpu_diff());
  }
  if (propagate_down[0]) {
    const Dtype* top_diff = top[0]->cpu_diff();
    // Gradient with respect to bottom data
    if (transpose_) {
      caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasTrans,
          M_, K_, N_,
          (Dtype)1., top_diff, this->blobs_[0]->cpu_data(),
          (Dtype)0., bottom[0]->mutable_cpu_diff());
    } else {
      caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans,
          M_, K_, N_,
          (Dtype)1., top_diff, this->blobs_[0]->cpu_data(),
          (Dtype)0., bottom[0]->mutable_cpu_diff());
    }
  }
}

inner_product_layer.cu:

template <typename Dtype>
void InnerProductLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  if (this->param_propagate_down_[0]) {
    const Dtype* top_diff = top[0]->gpu_diff();
    const Dtype* bottom_data = bottom[0]->gpu_data();
    vector<int> weight_shape(2);
    if (transpose_) {
      weight_shape[0] = K_;
      weight_shape[1] = N_;
    } else {
      weight_shape[0] = N_;
      weight_shape[1] = K_;
    }
    int count = weight_shape[0]*weight_shape[1];
    // A host-side loop (weight_diff[j] *= mask[j]) cannot touch GPU memory,
    // so the element-wise product is done with caffe_gpu_mul instead.
    caffe_gpu_mul<Dtype>(count,
        this->blobs_[0]->gpu_diff(),
        this->blobs_[0]->gpu_mask(),
        this->blobs_[0]->mutable_gpu_diff());
    Dtype* weight_diff = this->blobs_[0]->mutable_gpu_diff();
    // Gradient with respect to weight
    if (transpose_) {
      caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans,
          K_, N_, M_,
          (Dtype)1., bottom_data, top_diff,
          (Dtype)1., weight_diff);
    } else {
      caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans,
          N_, K_, M_,
          (Dtype)1., top_diff, bottom_data,
          (Dtype)1., weight_diff);
    }
  }
  if (bias_term_ && this->param_propagate_down_[1]) {
    const Dtype* top_diff = top[0]->gpu_diff();
    // Gradient with respect to bias
    caffe_gpu_gemv<Dtype>(CblasTrans, M_, N_, (Dtype)1., top_diff,
        bias_multiplier_.gpu_data(), (Dtype)1.,
        this->blobs_[1]->mutable_gpu_diff());
  }
  if (propagate_down[0]) {
    const Dtype* top_diff = top[0]->gpu_diff();
    // Gradient with respect to bottom data
    if (transpose_) {
      caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasTrans,
          M_, K_, N_,
          (Dtype)1., top_diff, this->blobs_[0]->gpu_data(),
          (Dtype)0., bottom[0]->mutable_gpu_diff());
    } else {
      caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans,
          M_, K_, N_,
          (Dtype)1., top_diff, this->blobs_[0]->gpu_data(),
          (Dtype)0., bottom[0]->mutable_gpu_diff());
    }
  }
}
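The post only shows the fully connected layer. Since base_conv_layer.cpp also calls Addmask(), the convolution layer's backward pass presumably needs the same treatment; the sketch below shows where the masking step could go in conv_layer.cu, assuming the same gpu_mask() accessor (this part is not shown in the referenced posts).

template <typename Dtype>
void ConvolutionLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  if (this->param_propagate_down_[0]) {
    // Same idea as in the fc layer: zero the gradient of pruned connections.
    const int count = this->blobs_[0]->count();
    caffe_gpu_mul<Dtype>(count,
        this->blobs_[0]->gpu_mask(),
        this->blobs_[0]->gpu_diff(),
        this->blobs_[0]->mutable_gpu_diff());
  }
  ...  // the rest of the stock Backward_gpu (weight, bias and bottom
       // gradients) is unchanged
}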

This completes the modifications.

 

In addition, newer versions of Caffe have added a sparse_ parameter; see https://github.com/BVLC/caffe/pulls?utf8=%E2%9C%93&q=sparse

posted @ 2017-06-20 21:38 柚芹re