
A Python implementation of the perceptron ----- Statistical Learning Methods (统计学习方法)

Reference: http://shpshao.blog.51cto.com/1931202/1119113


#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
#  未命名.py
#
#  Copyright 2013 t-dofan <t-dofan@T-DOFAN-PC>
#
#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  You should have received a copy of the GNU General Public License
#  along with this program; if not, write to the Free Software
#  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
#  MA 02110-1301, USA.
#
class Perceptron:
    # Initialization
    def __init__(self, learnrate, w0, w1, b):
        self.learnrate = learnrate
        self.w0 = w0
        self.w1 = w1
        self.b = b

    # Model: y * (w.x + b); positive when x is classified correctly
    def model(self, x):
        result = x[2] * (self.w0 * x[0] + self.w1 * x[1] + self.b)
        return result

    # Strategy: a point is misclassified when y * (w.x + b) <= 0
    def iserror(self, x):
        return self.model(x) <= 0

    # Algorithm: learnrate is the learning rate (the step size eta)
    # Update rule from the book: w_i = w_i + eta * y_i * x_i,  b = b + eta * y_i
    def gradientdescent(self, x):
        self.w0 = self.w0 + self.learnrate * x[2] * x[0]  # per the update rule, is the *x[2] needed here? (see note below)
        self.w1 = self.w1 + self.learnrate * x[2] * x[1]
        self.b = self.b + self.learnrate * x[2]

    # Training: sweep the data until no point is misclassified
    def traindata(self, data):
        times = 0
        done = False
        while not done:
            for i in range(len(data)):
                if self.iserror(data[i]):
                    self.gradientdescent(data[i])
                    times += 1
                    done = False
                    break
                else:
                    done = True
        print(times)
        print("rightParams: w0:%d, w1:%d, b:%d" % (self.w0, self.w1, self.b))

    def testmodel(self, x):
        result = self.w0 * x[0] + self.w1 * x[1] + self.b
        if result > 0:
            return 1
        else:
            return -1


def main():
    p = Perceptron(1, 0, 0, 0)
    data = [[3, 3, 1], [4, 3, 1], [1, 1, -1], [2, 2, -1], [5, 4, 1], [1, 3, -1]]
    testdata = [[4, 4, -1], [1, 2, -1], [1, 4, -1], [3, 2, -1], [5, 5, 1], [5, 1, 1], [5, 2, 1]]
    p.traindata(data)
    for i in testdata:
        print("%d  %d  %d" % (i[0], i[1], p.testmodel(i)))
    return 0


if __name__ == '__main__':
    main()
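For comparison, here is a minimal vectorized sketch of the same algorithm (my own rewrite, not from the book or the referenced post), with w0 and w1 folded into a single NumPy array; the data layout [x0, x1, y] matches the listing above.

import numpy as np

def train_perceptron(data, eta=1.0):
    # Rows of data are [x0, x1, y]; split into features X and labels y.
    X = np.array([d[:2] for d in data], dtype=float)
    y = np.array([d[2] for d in data], dtype=float)
    w = np.zeros(2)
    b = 0.0
    while True:
        margins = y * (X @ w + b)             # y * (w.x + b) for every sample at once
        wrong = np.flatnonzero(margins <= 0)  # indices of misclassified points
        if wrong.size == 0:
            return w, b                       # converged: nothing left to fix
        i = wrong[0]                          # fix the first misclassified point, as in the listing
        w += eta * y[i] * X[i]                # w <- w + eta * y_i * x_i
        b += eta * y[i]                       # b <- b + eta * y_i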

 

One question remains: the update rule in the book is w_i = w_i + eta*y_i*x_i, so is it really necessary to multiply by x[2] in the update step?

--------------------------------

Misread the rule: here x[2] is exactly y_i, so the factor is correct.
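
A quick one-step trace makes this concrete. With the initial parameters from main() (eta = 1, w0 = w1 = b = 0) and the first training point [3, 3, 1], the label is y = x[2] = +1, so the update is exactly w_i + eta*y_i*x_i:

p = Perceptron(1, 0, 0, 0)
x = [3, 3, 1]                 # x0 = 3, x1 = 3, y = x[2] = +1
print(p.model(x))             # 1 * (0*3 + 0*3 + 0) = 0, i.e. <= 0: misclassified
p.gradientdescent(x)          # w0 = 0 + 1*1*3, w1 = 0 + 1*1*3, b = 0 + 1*1
print(p.w0, p.w1, p.b)        # 3 3 1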

posted @ 2013-02-27 10:47  fandyst