

A--K-Means Fast Clustering

 

Clustering aims to understand and explore the natural structure of data. The best-known business example, the RFM customer-value model, is one application of cluster analysis; clustering is also used for data processing such as dimensionality reduction, data discretization and compression, and efficiently finding nearest neighbors.

 

Types of clustering algorithms

 

1. K-Means: K-means is a prototype-based, partitional clustering technique that tries to find a user-specified number of clusters (K), each represented by its centroid.

 

2. Hierarchical clustering: partitions the data into a hierarchy of nested clusters. Start with every point as a singleton cluster, then repeatedly merge the two closest clusters until a single cluster containing all points remains.

 

3. DBSCAN: a density-based algorithm that produces a partitional clustering, with the number of clusters determined automatically by the algorithm; points in low-density regions are treated as noise and ignored, so it does not produce a complete clustering. (A quick sketch of the latter two follows below.)
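For orientation, here is a minimal sketch of the latter two algorithms, assuming scikit-learn is available; the random data and the parameter values (n_clusters, eps, min_samples) are placeholders, not part of the original material.

import numpy as np
from sklearn.cluster import AgglomerativeClustering, DBSCAN

X = np.random.rand(100, 2)#stand-in 2-D dataset

#hierarchical: start from singleton clusters and repeatedly merge the closest pair
agg_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

#DBSCAN: the cluster count is chosen by the algorithm; label -1 marks noise points
db_labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)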

 

K-Means principle: randomly place K points as initial centroids, then compute the distance from every point in the data to each of the K centroids and assign each point to the cluster of its nearest centroid, forming K initial clusters. Next, take the mean of each cluster as its new centroid, recompute every point's distance to the centroids, update each point's cluster membership, update the centroids again, and iterate until the centroids no longer change.
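The same procedure in compact NumPy form, as a sketch only (it assumes no cluster ever ends up empty; the post builds a fuller pandas version step by step below):

import numpy as np

def kmeans_sketch(X, k, n_iter=100):
    rng = np.random.default_rng(0)
    cent = X[rng.choice(len(X), k, replace=False)]#K data points as initial centroids
    for _ in range(n_iter):
        #assign every point to its nearest centroid (squared Euclidean distance)
        labels = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(-1).argmin(1)
        #recompute each centroid as the mean of its cluster
        new_cent = np.array([X[labels == j].mean(0) for j in range(k)])
        if np.allclose(new_cent, cent):#centroids stopped moving: converged
            break
        cent = new_cent
    return cent, labels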

In [2]:
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
 
In [3]:
iris = pd.read_csv('iris (1).txt', header=None)
iris.shape
Out[4]:
(150, 5)
In [6]:
iris.head()

Out[6]:

     0    1    2    3            4
0  5.1  3.5  1.4  0.2  Iris-setosa
1  4.9  3.0  1.4  0.2  Iris-setosa
2  4.7  3.2  1.3  0.2  Iris-setosa
3  4.6  3.1  1.5  0.2  Iris-setosa
4  5.0  3.6  1.4  0.2  Iris-setosa
In [8]:
iris.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
0    150 non-null float64
1    150 non-null float64
2    150 non-null float64
3    150 non-null float64
4    150 non-null object
dtypes: float64(4), object(1)
memory usage: 5.9+ KB
In [14]:
iris.describe()

 

Out[14]:
                0           1           2           3
count  150.000000  150.000000  150.000000  150.000000
mean     5.843333    3.054000    3.758667    1.198667
std      0.828066    0.433594    1.764420    0.763161
min      4.300000    2.000000    1.000000    0.100000
25%      5.100000    2.800000    1.600000    0.300000
50%      5.800000    3.000000    4.350000    1.300000
75%      6.400000    3.300000    5.100000    1.800000
max      7.900000    4.400000    6.900000    2.500000
In [7]:
#used to create the initial centroids
import random

In [8]:

#find the range of each column so that we can place centroids within it
iris_min = iris.iloc[:, :4].min()
iris_max = iris.iloc[:, :4].max()
In [9]:
#midpoint between the extremes
iris_mid = (iris_min + iris_max) / 2
iris_mid 
Out[9]:
0    6.10
1    3.20
2    3.95
3    1.30
dtype: float64
In [10]:
#half-range (radius) between the extremes
iris_ran = (iris_max - iris_min) / 2
iris_ran 
 
Out[10]:
0    1.80
1    1.20
2    2.95
3    1.20
dtype: float64
 

Then sample randomly. To keep the centroids as spread out as possible, it is best to use numpy.random.random, which samples uniformly on [0, 1]. The samples are rescaled to [-1, 1], multiplied column-wise by the half-ranges computed above (stretching the array to the same width as the original data), and then the column midpoints are added; each column is thereby effectively sampled within that column's min-max range, producing random centroids inside the range of every column.

In [11]:
np.random.seed(1111)
iris_cent_o = np.random.random((2, 4))
iris_cent = (iris_cent_o-0.5)*2*list(iris_ran) + list(iris_mid)
iris_cent 
Out[11]:
array([[4.64397712, 4.22000888, 3.0270832 , 0.84514466],
       [4.30723542, 2.56542734, 2.40297115, 1.8661981 ]])
In [12]:
#custom function to generate random centroids
def randCent(dataSet, k):#k = number of centroids
    n = dataSet.shape[1]
    data_min = dataSet.iloc[:, :n-1].min()
    data_max = dataSet.iloc[:, :n-1].max()
    data_mid = (data_min + data_max) / 2
    data_ran = (data_max - data_min) / 2
    data_cent_o = np.random.random((k, n-1))#k rows, n-1 columns
    data_cent = (data_cent_o-0.5)*2 * list(data_ran) + list(data_mid)
    return data_cent
 

 

In [19]:
iris_min 
Out[19]:
0    4.3
1    2.0
2    1.0
3    0.1
dtype: float64
In [20]:
iris_max

Out[20]:

0    7.9
1    4.4
2    6.9
3    2.5
dtype: float64
In [27]:
a=randCent(iris, 4)
a

Out[27]:

array([[5.57344612, 3.93063365, 1.4068145 , 0.51118202],
       [4.75840507, 3.09249524, 4.6954244 , 0.12158525],
       [5.28014876, 2.55746611, 6.66461261, 2.31072381],
       [7.89651676, 3.30991232, 2.42389736, 0.437418  ]])
In [13]:
#distance function: when we never need the actual distance and only compare magnitudes, the squared distance can stand in for the distance, skipping the square root and saving computation
def distEclud(arrayA, arrayB):
    dist_o = arrayA - arrayB
    return np.sum(np.power(dist_o, 2), axis=1)#np.power squares each element
In [23]:
#parameters: dataset, K, distance function, random-centroid generator
def kMeans(dataSet, k, distMeas=distEclud, createCent=randCent):
    m = dataSet.shape[0]
    n = dataSet.shape[1]
    centroids = createCent(dataSet, k)#generate random centroids
    clusterAssment = np.zeros((m,3))#three columns: min distance, newest cluster, previous cluster
    clusterAssment[:, 0] = np.inf
    clusterAssment[:, 1: 3] = -1
    result_set = pd.concat([dataSet, pd.DataFrame(clusterAssment)], axis=1,#append the three columns to the original data
                           ignore_index = True)
    clusterChanged = True #force the while loop to run at least once
    while clusterChanged:
        clusterChanged = False#guard against an endless loop
        for i in range(m):
            dist = distMeas(dataSet.iloc[i, :n-1].values, centroids)#distance to every centroid
            result_set.iloc[i, n] = dist.min()
            result_set.iloc[i, n+1] = np.where(dist == dist.min())[0][0]#index of the nearest centroid
            clusterChanged = not (result_set.iloc[:, -1] == result_set.iloc[:, -2]).all()#iterate again only if any point changed cluster
        if clusterChanged:
            cent_df = result_set.groupby(n+1).mean()#update the centroids
            centroids = cent_df.iloc[:,:n-1].values
            result_set.iloc[:, -1] = result_set.iloc[:, -2]
    return centroids, result_set
 
 
 
In [24]:
testSet = pd.read_table("test.txt", header=None)
testSet.head()
 
Out[24]:
          0         1
0  1.658985  4.285136
1 -3.453687  3.424321
2  4.838138 -1.151539
3 -5.379713 -3.362104
4  0.972564  2.924086
 
In [17]:
 
plt.plot(testSet.iloc[:,0], testSet.iloc[:,1], 'o')

Out[17]:

[<matplotlib.lines.Line2D at 0x2167f1bf208>]
 
In [18]:
#append a dummy label column
ze = pd.DataFrame(np.zeros(80).reshape(-1, 1))
test_set = pd.concat([testSet, ze], axis=1, ignore_index = True)
test_set.head()
Out[18]:
          0         1    2
0  1.658985  4.285136  0.0
1 -3.453687  3.424321  0.0
2  4.838138 -1.151539  0.0
3 -5.379713 -3.362104  0.0
4  0.972564  2.924086  0.0
In [112]:
test_cent, test_cluster = kMeans(test_set, 4)
 
In [77]:
test_cluster.head()#inspect the result set
Out[77]:
          0         1    2         3    4    5
0  1.658985  4.285136  0.0  2.320192  0.0  0.0
1 -3.453687  3.424321  0.0  1.390049  2.0  2.0
2  4.838138 -1.151539  0.0  6.638391  3.0  3.0
3 -5.379713 -3.362104  0.0  4.161410  1.0  1.0
4  0.972564  2.924086  0.0  2.769678  0.0  0.0
In [76]:
test_cent#inspect the centroids

Out[76]:

array([[ 2.6265299 ,  3.10868015],
       [-3.38237045, -2.9473363 ],
       [-2.46154315,  2.78737555],
       [ 2.80293085, -2.7315146 ]])
In [113]:
#visualize the result
plt.scatter(test_cluster.iloc[:,0], test_cluster.iloc[:, 1],c=test_cluster.iloc[:, -1])
plt.plot(test_cent[:, 0], test_cent[:, 1], 'o', color='red')

Out[113]:
[<matplotlib.lines.Line2D at 0x2c24ad94be0>]
 

In [81]:

#try the next dataset
iris=pd.read_csv("iris (1).txt",header=None)
iris.head()

Out[81]:

     0    1    2    3            4
0  5.1  3.5  1.4  0.2  Iris-setosa
1  4.9  3.0  1.4  0.2  Iris-setosa
2  4.7  3.2  1.3  0.2  Iris-setosa
3  4.6  3.1  1.5  0.2  Iris-setosa
4  5.0  3.6  1.4  0.2  Iris-setosa
In [82]:
iris_cent, iris_result = kMeans(iris, 3)

In [83]:

iris_cent
Out[83]:
array([[5.9016129 , 2.7483871 , 4.39354839, 1.43387097],
       [6.85      , 3.07368421, 5.74210526, 2.07105263],
       [5.006     , 3.418     , 1.464     , 0.244     ]])
In [85]:
iris_result.head()
Out[85]:
     0    1    2    3            4         5    6    7
0  5.1  3.5  1.4  0.2  Iris-setosa  0.021592  2.0  2.0
1  4.9  3.0  1.4  0.2  Iris-setosa  0.191992  2.0  2.0
2  4.7  3.2  1.3  0.2  Iris-setosa  0.169992  2.0  2.0
3  4.6  3.1  1.5  0.2  Iris-setosa  0.269192  2.0  2.0
4  5.0  3.6  1.4  0.2  Iris-setosa  0.039192  2.0  2.0
 

Note: earlier we appended a dummy label column to the test data because our custom function assumes the last column of the input is a label; the iris data comes with its own label column, so no dummy column is needed.

 

Computing the sum of squared errors (SSE)

 

Because our distance measure is the squared distance (no square root taken), the first of the three columns we defined in the clusterAssment container already holds each point's squared distance to its cluster centroid; summing that column therefore gives the SSE directly.

In [86]:
iris_result.iloc[:,5].sum()
Out[86]:
78.94084142614602
In [19]:
 
#learning curve over K; parameters: dataset, clustering model, upper bound on K
def kcLearningCurve(dataSet, cluster = kMeans, k=10):
    n=dataSet.shape[1]
    SSE=[]
    for i in range(1,k):
        centroids, result_set = cluster(dataSet, i)
        SSE.append(result_set.iloc[:,n].sum())
    plt.plot(range(1,k),SSE,"-o")#x-axis matches the K values actually fitted; K=1 is only a baseline (its SSE is the total scatter)
    return SSE
In [20]:
kcLearningCurve(test_set, cluster = kMeans, k=10)

Out[20]:

[1465.5800234838161,
 792.9168565373268,
 405.13810196190366,
 150.62604907269227,
 135.454691006784,
 126.96707600844391,
 116.72192330554486,
 93.69437495941462,
 109.7354882708554]
 
 

Here we can see that the SSE improves quickly until the number of centroids reaches 4-5, and the marginal decrease diminishes afterwards, so the data structure suggests that 4 or 5 clusters may fit best.

In [108]:
kcLearningCurve(iris)#try it on the iris data
Out[108]:
[680.8244,
 152.36870647733906,
 78.94084142614602,
 143.45373548406212,
 78.94506582597731,
 46.55057267267267,
 34.30776223776225,
 57.31787321428571,
 35.042759952465836]
 
In [21]:
#refined learning curve; parameters: dataset, model, K range, repetitions per K
def kLearningCurve_1(dataSet,cluster=kMeans,k=5,n=3):
    ncol = dataSet.shape[1]#index of the distance column in result_set
    yAc_mean = []
    yAc_up = []
    yAc_down = []
    for i in range(2,k+1):
        SSE=np.array([])
        for j in range(n):
            centroids, result_set = cluster(dataSet,i)
            SSE=np.append(SSE,result_set.iloc[:,ncol].sum())
        yAc_mean.append(SSE.mean())
        yAc_up.append(SSE.mean()+SSE.var())
        yAc_down.append(SSE.mean()-SSE.var())
    plt.plot(range(2, k+1), yAc_mean, '-o',color='black')
    plt.plot(range(2, k+1), yAc_up, '--o',color='red')
    plt.plot(range(2, k+1), yAc_down, '--o',color='red')
    return yAc_mean, yAc_up, yAc_down
#the mean reflects central tendency, the variance dispersion; when the variance is large, the mean's reliability is weakened

 

In [133]:
kLearningCurve_1(test_set)
Out[133]:
([845.2416702845562,
  458.2822115753449,
  150.40213427393698,
  138.74542164204135],
 [3430.3618494349266,
  2170.186701225719,
  150.50240994814024,
  152.46865945895263],
 [-1739.8785088658142,
  -1253.622278075029,
  150.30185859973372,
  125.02218382513007])
 
 

We can see that K = 4 is probably a good choice, but the variance at K = 2 is large and needs further handling.

 

The stability of model convergence

In [117]:
#first, run several times with different random initial centroids and compare
np.random.seed(123)
for i in range(1, 5):
    plt.subplot(2, 2, i)
    test_cent, test_cluster = kMeans(test_set, 3)
    plt.scatter(test_cluster.iloc[:,0], test_cluster.iloc[:, 1],
                c=test_cluster.iloc[:, -1])
    plt.plot(test_cent[:, 0], test_cent[:, 1], 'o', color='red')
    print(test_cluster.iloc[:, 3].sum())
463.6496809222772
506.0588518418539
405.13810196190366
405.13810196190366
 
In [118]:
np.random.seed(123)
for i in range(1, 5):
    plt.subplot(2, 2, i)
    test_cent, test_cluster = kMeans(test_set, 5)
    plt.scatter(test_cluster.iloc[:,0], test_cluster.iloc[:, 1],
                c=test_cluster.iloc[:, -1])
    plt.plot(test_cent[:, 0], test_cent[:, 1], 'o', color='red')
    print(test_cluster.iloc[:, 3].sum())

 

132.51859467191917
134.72812560240237
134.4631443966685
131.89160568812764
 
In [119]:
np.random.seed(123)
for i in range(1, 5):
    plt.subplot(2, 2, i)
    test_cent, test_cluster = kMeans(test_set, 4)
    plt.scatter(test_cluster.iloc[:,0], test_cluster.iloc[:, 1],
                c=test_cluster.iloc[:, -1])
    plt.plot(test_cent[:, 0], test_cent[:, 1], 'o', color='red')
    print(test_cluster.iloc[:, 3].sum())
438.99925511275205
149.95430467642635
149.95430467642635
150.62604907269227
 
 

From the three results above we can see that the choice of initial centroids does affect the final result, and the degree of influence is related to the number of centroids; the closer the initial centroids are to the actual spatial concentration of the data, the smaller the influence. It is also likely to depend strongly on how the centroids are generated.

 

To reduce this error we have bisecting K-means and K-means++, both designed to lower the sensitivity to initialization, and both give fairly stable output. Alternatively, run the same K several times and check whether the result is stable.
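For comparison, a brief sketch of how scikit-learn's KMeans (shown in full at the end of this post) bundles both remedies, with illustrative parameter values:

from sklearn.cluster import KMeans

#k-means++ seeding plus 10 independent restarts; fit keeps the run with the
#lowest SSE, exposed afterwards as inertia_
km = KMeans(n_clusters=4, init='k-means++', n_init=10).fit(test_set.iloc[:, :2].values)
print(km.inertia_)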

 

The idea behind bisecting K-means:

 

First run 2-means on the full dataset, producing two clusters (A, B). Then bisect A into (A1, A2), giving the three clusters (A1, A2, B), and compute the resulting SSE_A. Restore A, bisect B instead into (B1, B2), and compute SSE_B for (A, B1, B2). If SSE_B < SSE_A, keep the (A, B1, B2) scheme. Then apply the same procedure to A, B1 and B2 in turn, repeating until the user-specified K clusters have been produced, at which point the iteration stops.

 

K-means++ reference: http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf
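A minimal sketch of the seeding idea from that paper (an illustration, not this post's implementation): the first centroid is drawn uniformly, and each later centroid is drawn with probability proportional to the squared distance to the nearest centroid chosen so far.

import numpy as np

def kmeanspp_init(X, k, seed=0):
    rng = np.random.default_rng(seed)
    cent = [X[rng.integers(len(X))]]#first centroid: uniform draw
    for _ in range(1, k):
        #squared distance from every point to its nearest chosen centroid
        d2 = ((X[:, None, :] - np.array(cent)[None]) ** 2).sum(-1).min(axis=1)
        #sample the next centroid with probability proportional to d2
        cent.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(cent)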

 

Implementing bisecting K-means

In [69]:

#helper function: given fixed centroids, decide which cluster each point of the dataset belongs to; parameters: dataset, centroids, distance function
def kMeans_assment(dataSet, centroids, distMeas = distEclud):
    m = dataSet.shape[0]
    n = dataSet.shape[1]
    clusterAssment = np.zeros((m,3))
    clusterAssment[:, 0] = np.inf
    clusterAssment[:, 1: 3] = -1
    result_set = pd.concat([dataSet, pd.DataFrame(clusterAssment)], axis=1,
                           ignore_index = True)
    for i in range(m):
        dist = distMeas(dataSet.iloc[i, :n-1].values, centroids)
        result_set.iloc[i, n] = dist.min()
        result_set.iloc[i, n+1] = np.where(dist == dist.min())[0][0]#index of the nearest centroid
    result_set.iloc[:, -1] = result_set.iloc[:, -2]#assign once; no iteration here
    return result_set
In [72]:
#custom bisecting K-means function; parameters: dataset, K, distance function
def biKmeans(dataSet, k, distMeas = distEclud):
    m = dataSet.shape[0]
    n = dataSet.shape[1]
    centroids, result_set = kMeans(dataSet, 2)#start with one 2-means split
    j = 2
    while j < k:
        result_tmp = result_set.groupby(n+1).sum()
        clusterAssment = pd.concat([pd.DataFrame(centroids),
                                    result_tmp.iloc[:,n]], axis = 1, ignore_index = True)
        lowestSSE = clusterAssment.iloc[:, n-1].sum()#current total SSE, kept for reference
        centList = []
        sseTotal = np.array([])
        for i in clusterAssment.index:
            #points currently in cluster i, re-indexed, then bisected
            df_temp = result_set.iloc[:, :n][result_set.iloc[:, -1] == i]
            df_temp.index = range(df_temp.shape[0])
            cent, res = kMeans(df_temp, 2, distMeas)
            centList.append(cent)
            sseSplit = res.iloc[:, n].sum()
            sseNotSplit = result_set.iloc[:, n][result_set.iloc[:, -1] != i].sum()
            sseTotal = np.append(sseTotal, sseSplit + sseNotSplit)
        #keep the split that yields the lowest total SSE
        min_index = np.where(sseTotal == sseTotal.min())[0][0]
        clusterAssment = clusterAssment.drop([min_index])
        centroids = np.vstack([clusterAssment.iloc[:, :n-1].values, centList[min_index]])
        result_set = kMeans_assment(dataSet, centroids)
        j = j + 1
    return centroids, result_set

 

In [74]:
#quick check that it runs properly
test_cent, test_cluster = biKmeans(test_set, 4)
plt.scatter(test_cluster.iloc[:,0], test_cluster.iloc[:, 1],
            c=test_cluster.iloc[:, -1])
plt.plot(test_cent[:, 0], test_cent[:, 1], 'o', color='red')
Out[74]:
[<matplotlib.lines.Line2D at 0x216016b2fd0>]
 
In [76]:
np.random.seed(123)
sseList = []
for i in range(10):
    test_cent, test_cluster = biKmeans(test_set, 3)
    print(test_cluster.iloc[:, 3].sum())
    sseList.append(test_cluster.iloc[:, 3].sum())
473.01518093904485
408.03346178326774
473.01518093904485
473.01518093904485
473.01518093904485
406.45835984876476
473.01518093904485
408.03346178326774
473.01518093904485
473.01518093904485
 

Silhouette coefficient

 

The silhouette coefficient, built from cohesion and separation, is the most important measure of clustering validity besides the SSE.

 

Cohesion is defined as the sum of proximities of the points in a cluster to the cluster prototype (centroid); when the proximity measure is squared Euclidean distance, cohesion is exactly that cluster's SSE.

 

Separation is defined as a proximity measure between cluster prototypes (centroids), that is, the sum of the distances between every pair of centroids: with three centroids (A, B, C), separation = (A-B)^2 + (A-C)^2 + (B-C)^2. This result also corresponds to the sum of squared distances from each centroid to the overall centroid, so the value is equivalent to the between-cluster sum of squares, SSB.

 

And since the total sum of squares TSS = SSE + SSB, minimizing SSE is equivalent to maximizing SSB; in other words, the larger SSB is, the better.
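To make the identity concrete, here is a small self-contained check; the data and the two-way partition are hypothetical, and SSB is computed in its cluster-size-weighted form, for which TSS = SSE + SSB holds exactly:

import numpy as np

X = np.random.rand(80, 2)#hypothetical data
labels = (X[:, 0] > 0.5).astype(int)#any partition into two clusters
overall = X.mean(axis=0)

TSS = ((X - overall) ** 2).sum()
SSE = sum(((X[labels == j] - X[labels == j].mean(axis=0)) ** 2).sum()
          for j in (0, 1))
SSB = sum((labels == j).sum() * ((X[labels == j].mean(axis=0) - overall) ** 2).sum()
          for j in (0, 1))
print(np.isclose(TSS, SSE + SSB))#True for any partition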

 

The silhouette coefficient is written s_i = (b_i - a_i) / max(a_i, b_i)

where, for the i-th object, a_i is the average distance from it to all other objects in its own cluster; and, for each cluster not containing the i-th object, we compute the average distance from the object to all objects in that cluster and take the minimum over all such clusters, denoted b_i.

 

The silhouette coefficient varies between -1 and 1. We do not want negative values, since they mean a_i > b_i while we want a_i < b_i; and the closer a_i is to 0 the better, because the silhouette reaches its maximum of 1 when a_i = 0. For an overall picture, compute the silhouette of every point and take the mean, which gives a global measure of clustering quality.
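As an optional cross-check, scikit-learn ships silhouette_score, which computes exactly this mean silhouette; note that it uses plain (non-squared) Euclidean distance by default, so its values will differ somewhat from the squared-distance version implemented below:

from sklearn.metrics import silhouette_score

centroids, result_set = biKmeans(test_set, 4)
print(silhouette_score(result_set.iloc[:, :2].values,#the two feature columns
                       result_set.iloc[:, -1].values))#final cluster labels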

 

Python implementation

In [82]:
#run a 4-cluster split on the dataset
centroids, result_set = biKmeans(test_set, 4)
result_set.head()

Out[82]:

          0         1    2         3    4    5
0  1.658985  4.285136  0.0  2.320192  0.0  0.0
1 -3.453687  3.424321  0.0  1.261352  2.0  2.0
2  4.838138 -1.151539  0.0  6.638391  1.0  1.0
3 -5.379713 -3.362104  0.0  3.604773  3.0  3.0
4  0.972564  2.924086  0.0  2.769678  0.0  0.0
In [80]:
#confirm the clustering visually
plt.scatter(result_set.iloc[:,0], result_set.iloc[:, 1],
            c=result_set.iloc[:, -1])
plt.plot(centroids[:, 0], centroids[:, 1], 'o', color='red')
 
Out[80]:
[<matplotlib.lines.Line2D at 0x2160179e390>]
 
In [47]:
#first append one column per centroid to the result set: for point i, column n+j holds the mean distance from point i to every point in cluster j (with two centroids we would add two columns)
m, n = result_set.shape
nc = len(centroids)
for i in range(nc):
    result_set[n+i]=0
result_list = []
for i in range(nc):#save each cluster as its own DataFrame and collect them in a list, so every list element holds the points of one cluster
    result_temp=result_set[result_set.iloc[:, n-1] == i]
    result_temp.index = range(result_temp.shape[0])
    result_list.append(result_temp)
for i in range(m):#loop over the rows
    for j in range(nc):#and over every cluster
        result_set.iloc[i,n+j]=distEclud(result_set.iloc[i, :n-4].values,#the feature columns
                                         result_list[j].iloc[:, :n-4].values).mean()
 
 
 
In [49]:

result_set.head()#columns 6, 7, 8 and 9 hold the mean distance from point i to clusters 0, 1, 2 and 3; columns 4 and 5 show row 0 belongs to cluster 2, hence column 8 is the smallest
 

Out[49]:

          0         1    2         3    4    5          6          7           8          9
0  1.658985  4.285136  0.0  2.320192  2.0  2.0  53.091309  20.724456    4.136116  79.353982
1 -3.453687  3.424321  0.0  1.390049  1.0  1.0  79.588890   2.892466   38.884591  42.233165
2  4.838138 -1.151539  0.0  6.638391  0.0  0.0   9.187702  70.302810   24.856602  72.431709
3 -5.379713 -3.362104  0.0  4.161410  3.0  3.0  69.902615  47.834231  107.786897   5.791472
4  0.972564  2.924086  0.0  2.769678  2.0  2.0  37.885372  13.314199    4.585602  55.069116
In [50]:
#append two more columns holding a_i and b_i
result_set["a"]=0
result_set["b"]=0
for i in range(m):
    l_temp=[]
    for j in range(nc):
        if(result_set.iloc[i,n-1] == j):
            result_set.loc[i,"a"] = result_set.iloc[i, n+j]
        else:
            l_temp.append(result_set.iloc[i, n+j])
    result_set.loc[i,"b"] = np.array(l_temp).min()
In [51]:
result_set.head()

Out[51]:

          0         1    2         3    4    5          6          7           8          9         a          b
0  1.658985  4.285136  0.0  2.320192  2.0  2.0  53.091309  20.724456    4.136116  79.353982  4.136116  20.724456
1 -3.453687  3.424321  0.0  1.390049  1.0  1.0  79.588890   2.892466   38.884591  42.233165  2.892466  38.884591
2  4.838138 -1.151539  0.0  6.638391  0.0  0.0   9.187702  70.302810   24.856602  72.431709  9.187702  24.856602
3 -5.379713 -3.362104  0.0  4.161410  3.0  3.0  69.902615  47.834231  107.786897   5.791472  5.791472  47.834231
4  0.972564  2.924086  0.0  2.769678  2.0  2.0  37.885372  13.314199    4.585602  55.069116  4.585602  13.314199
In [52]:
result_set["s"] = (result_set.loc[:,"b"]-result_set.loc[:,"a"])/result_set.loc[:,"a":"b"].max(axis=1)#按照行计算
result_set["s"].mean()#一般0.7往上证明结果还不错
Out[52]:
0.8496910003043098
In [53]:
result_set.head()

Out[53]:

          0         1    2         3    4    5          6          7           8          9         a          b         s
0  1.658985  4.285136  0.0  2.320192  2.0  2.0  53.091309  20.724456    4.136116  79.353982  4.136116  20.724456  0.800423
1 -3.453687  3.424321  0.0  1.390049  1.0  1.0  79.588890   2.892466   38.884591  42.233165  2.892466  38.884591  0.925614
2  4.838138 -1.151539  0.0  6.638391  0.0  0.0   9.187702  70.302810   24.856602  72.431709  9.187702  24.856602  0.630372
3 -5.379713 -3.362104  0.0  4.161410  3.0  3.0  69.902615  47.834231  107.786897   5.791472  5.791472  47.834231  0.878926
4  0.972564  2.924086  0.0  2.769678  2.0  2.0  37.885372  13.314199    4.585602  55.069116  4.585602  13.314199  0.655586
In [57]:
#custom function computing the mean silhouette coefficient
def silhouetteCoe(result_set):
    m, n = result_set.shape
    nc = len(centroids)#note: relies on the global centroids
    for i in range(nc):
        result_set[n+i]=0
    result_list = []
    for i in range(nc):#one DataFrame per cluster
        result_temp = result_set[result_set.iloc[:, n-1] == i]
        result_temp.index = range(result_temp.shape[0])
        result_list.append(result_temp)
    for i in range(m):#mean distance from point i to every cluster
        for j in range(nc):
            result_set.iloc[i,n+j]=distEclud(result_set.iloc[i, :n-4].values,
                                             result_list[j].iloc[:, :n-4].values).mean()
    result_set["a"]=0
    result_set["b"]=0
    for i in range(m):
        l_temp=[]
        for j in range(nc):
            if(result_set.iloc[i,n-1] == j):
                result_set.loc[i,"a"] = result_set.iloc[i, n+j]
            else:
                l_temp.append(result_set.iloc[i, n+j])
        result_set.loc[i,"b"] = np.array(l_temp).min()#b_i: the smallest mean distance to any other cluster
    result_set["s"] = (result_set.loc[:,"b"]-result_set.loc[:,"a"])/result_set.loc[:,"a":"b"].max(axis=1)
    return result_set["s"].mean()

In [83]:

sil = []
for i in range(1, 7):
    centroids, result_set = biKmeans(test_set, i+1)
    sil.append(silhouetteCoe(result_set))
plt.plot(range(2, 8), sil, '--o')
 
Out[83]:
[<matplotlib.lines.Line2D at 0x21601923080>]
 
In [84]:
#test with the iris data
sil = []
for i in range(1, 7):
    centroids, result_set = biKmeans(iris, i+1)
    sil.append(silhouetteCoe(result_set))
plt.plot(range(2, 8), sil, '--o')
Out[84]:
[<matplotlib.lines.Line2D at 0x216019f70b8>]
 
 

K-Means in Scikit-Learn

In [65]:
#as with KNN, the input is a numpy array made up of the data's feature columns
kmeans_set = testSet.values
plt.plot(kmeans_set[:,0], kmeans_set[:,1], 'o')
Out[65]:
[<matplotlib.lines.Line2D at 0x2167f2e0a20>]
 
In [66]:
from sklearn.cluster import KMeans
n_cluster = 4
kmeans = KMeans(n_cluster)
kmeans.fit(kmeans_set)
Out[66]:
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
    n_clusters=4, n_init=10, n_jobs=1, precompute_distances='auto',
    random_state=None, tol=0.0001, verbose=0)
In [67]:
test_cluster = kmeans.predict(kmeans_set)
test_cent = kmeans.cluster_centers_
test_cent

Out[67]:

array([[-2.46154315,  2.78737555],
       [ 2.80293085, -2.7315146 ],
       [-3.38237045, -2.9473363 ],
       [ 2.6265299 ,  3.10868015]])
In [68]:
plt.scatter(kmeans_set[:, 0], kmeans_set[:, 1], c = test_cluster)
plt.plot(test_cent[:, 0], test_cent[:, 1], 'o', color='red')
Out[68]:
[<matplotlib.lines.Line2D at 0x216013f2128>]
 