
Python kMeans Code: A Detailed Implementation of the k-Means Algorithm

Clustering is a form of unsupervised learning that groups similar objects into the same cluster, somewhat like fully automatic classification: the more similar the objects within a cluster, and the more distinct the objects across clusters, the better the clustering result. This article walks through a Python implementation of the k-means algorithm in detail; I hope it serves as a useful reference for anyone interested.

1. The k-means clustering algorithm

k均值聚類将資料分為k個簇,每個簇通過其質心,即簇中所有點的中心來描述。首先随機确定k個初始點作為質心,然後将資料集配置設定到距離最近的簇中。然後将每個簇的質心更新為所有資料集的平均值。然後再進行第二次劃分資料集,直到聚類結果不再變化為止。

Pseudocode:

    randomly create k cluster centroids
    while the cluster assignment of any point changes:
        for each data point in the dataset:
            for each centroid:
                compute the distance from the point to the centroid
            assign the point to the cluster of the nearest centroid
        for each cluster:
            set the centroid to the mean of all points in the cluster

Python implementation:

    import numpy as np
    import matplotlib.pyplot as plt

    def loadDataSet(fileName):
        # Read a tab-separated file of numbers into a float32 array.
        dataMat = []
        with open(fileName) as f:
            for line in f.readlines():
                line = line.strip().split('\t')
                dataMat.append(line)
        dataMat = np.array(dataMat).astype(np.float32)
        return dataMat

    def distEclud(vecA, vecB):
        # Euclidean distance between two vectors.
        return np.sqrt(np.sum(np.power(vecA - vecB, 2)))

    def randCent(dataSet, k):
        # Create k random centroids within the bounds of each data dimension.
        m = np.shape(dataSet)[1]
        center = np.mat(np.ones((k, m)))
        for i in range(m):
            centmin = min(dataSet[:, i])
            centmax = max(dataSet[:, i])
            center[:, i] = centmin + (centmax - centmin) * np.random.rand(k, 1)
        return center

    def kMeans(dataSet, k, distMeans=distEclud, createCent=randCent):
        m = np.shape(dataSet)[0]
        # Column 0: assigned cluster index; column 1: squared distance to its centroid.
        clusterAssment = np.mat(np.zeros((m, 2)))
        centroids = createCent(dataSet, k)
        clusterChanged = True
        while clusterChanged:
            clusterChanged = False
            for i in range(m):
                # Assign point i to the nearest centroid.
                minDist = np.inf
                minIndex = -1
                for j in range(k):
                    distJI = distMeans(dataSet[i, :], centroids[j, :])
                    if distJI < minDist:
                        minDist = distJI
                        minIndex = j
                if clusterAssment[i, 0] != minIndex:
                    clusterChanged = True
                clusterAssment[i, :] = minIndex, minDist**2
            # Recompute each centroid as the mean of its assigned points.
            for cent in range(k):
                ptsInClust = dataSet[np.nonzero(clusterAssment[:, 0].A == cent)[0]]
                centroids[cent, :] = np.mean(ptsInClust, axis=0)
        return centroids, clusterAssment

    data = loadDataSet('testSet.txt')
    muCentroids, clusterAssing = kMeans(data, 4)
    fig = plt.figure(0)
    ax = fig.add_subplot(111)
    # Flatten the (m, 1) label column so scatter gets a 1-D color array.
    ax.scatter(data[:, 0], data[:, 1], c=clusterAssing[:, 0].A.ravel())
    plt.show()
    print(clusterAssing)
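The script above expects a testSet.txt file. If the original dataset is not at hand, a stand-in with four well-separated clusters can be generated as follows; the cluster centers, noise scale, and point counts are arbitrary choices, not the original data:

```python
import numpy as np

# Hypothetical stand-in for testSet.txt: sample 20 points around each of
# four cluster centers, then write them in the tab-separated format that
# loadDataSet above expects.
rng = np.random.default_rng(0)
centers = np.array([[2.0, 2.0], [-2.0, 2.0], [-2.0, -2.0], [2.0, -2.0]])
points = np.vstack([c + 0.3 * rng.standard_normal((20, 2)) for c in centers])

np.savetxt('testSet.txt', points, delimiter='\t')
print(points.shape)  # -> (80, 2)
```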

2. Bisecting k-means

The k-means algorithm can converge to a local minimum rather than the global one. A common measure of clustering quality is the sum of squared errors (SSE); because the distances are squared, points far from their cluster center are weighted more heavily. To mitigate convergence to a local minimum, the bisecting k-means algorithm was proposed.
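For a given assignment, the SSE is the sum of squared distances from each point to its assigned centroid; the points, labels, and centroids below are invented just to show the computation:

```python
import numpy as np

# Toy clustering: four points on a line, split into two clusters.
points = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0], [11.0, 0.0]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([[0.5, 0.0], [10.5, 0.0]])

# SSE: square the residual of every point from its own centroid and sum.
sse = np.sum((points - centroids[labels]) ** 2)
print(sse)  # -> 1.0
```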

首先将所有點作為一個簇,然後将該簇一分為二,然後選擇所有簇中對其劃分能夠最大程度減低SSE的值的簇,直到滿足指定簇數為止。

Pseudocode:

    treat all points as a single cluster
    compute the SSE
    while the number of clusters is less than k:
        for each cluster:
            compute the total error
            run k-means with k = 2 on the cluster
            compute the total error after splitting the cluster in two
        perform the split on the cluster that gives the lowest total error
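The selection rule in the loop above can be illustrated with invented SSE numbers: for each candidate cluster, the total error after splitting it is its post-split SSE plus the SSE of the clusters left untouched, and the cluster minimizing this total is the one that gets split:

```python
# Invented SSE values for two current clusters (indices 0 and 1).
sse_if_split = {0: 4.0, 1: 9.0}    # SSE of each cluster after a 2-means split
sse_not_split = {0: 12.0, 1: 6.0}  # SSE of all the *other* clusters combined

# Total error if we were to split each candidate cluster.
total = {c: sse_if_split[c] + sse_not_split[c] for c in sse_if_split}
best = min(total, key=total.get)   # cluster with the lowest resulting total
print(best, total[best])  # -> 1 15.0
```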

Python implementation:

    import numpy as np
    import matplotlib.pyplot as plt

    def loadDataSet(fileName):
        # Read a tab-separated file of numbers into a float32 array.
        dataMat = []
        with open(fileName) as f:
            for line in f.readlines():
                line = line.strip().split('\t')
                dataMat.append(line)
        dataMat = np.array(dataMat).astype(np.float32)
        return dataMat

    def distEclud(vecA, vecB):
        # Euclidean distance between two vectors.
        return np.sqrt(np.sum(np.power(vecA - vecB, 2)))

    def randCent(dataSet, k):
        # Create k random centroids within the bounds of each data dimension.
        m = np.shape(dataSet)[1]
        center = np.mat(np.ones((k, m)))
        for i in range(m):
            centmin = min(dataSet[:, i])
            centmax = max(dataSet[:, i])
            center[:, i] = centmin + (centmax - centmin) * np.random.rand(k, 1)
        return center

    def kMeans(dataSet, k, distMeans=distEclud, createCent=randCent):
        # Standard k-means, identical to the first listing.
        m = np.shape(dataSet)[0]
        clusterAssment = np.mat(np.zeros((m, 2)))
        centroids = createCent(dataSet, k)
        clusterChanged = True
        while clusterChanged:
            clusterChanged = False
            for i in range(m):
                minDist = np.inf
                minIndex = -1
                for j in range(k):
                    distJI = distMeans(dataSet[i, :], centroids[j, :])
                    if distJI < minDist:
                        minDist = distJI
                        minIndex = j
                if clusterAssment[i, 0] != minIndex:
                    clusterChanged = True
                clusterAssment[i, :] = minIndex, minDist**2
            for cent in range(k):
                ptsInClust = dataSet[np.nonzero(clusterAssment[:, 0].A == cent)[0]]
                centroids[cent, :] = np.mean(ptsInClust, axis=0)
        return centroids, clusterAssment

    def biKmeans(dataSet, k, distMeans=distEclud):
        m = np.shape(dataSet)[0]
        clusterAssment = np.mat(np.zeros((m, 2)))
        # Start with one cluster whose centroid is the mean of all points.
        centroid0 = np.mean(dataSet, axis=0).tolist()
        centList = [centroid0]
        for j in range(m):
            clusterAssment[j, 1] = distMeans(dataSet[j, :], np.mat(centroid0))**2
        while (len(centList) < k):
            lowestSSE = np.inf
            for i in range(len(centList)):
                # Tentatively split cluster i with 2-means and measure the
                # total SSE that would result.
                ptsInCurrCluster = dataSet[np.nonzero(clusterAssment[:, 0].A == i)[0], :]
                centroidMat, splitClustAss = kMeans(ptsInCurrCluster, 2, distMeans)
                sseSplit = np.sum(splitClustAss[:, 1])
                sseNotSplit = np.sum(clusterAssment[np.nonzero(clusterAssment[:, 0].A != i)[0], 1])
                if (sseSplit + sseNotSplit) < lowestSSE:
                    bestCentToSplit = i
                    bestNewCents = centroidMat.copy()
                    bestClustAss = splitClustAss.copy()
                    lowestSSE = sseSplit + sseNotSplit
            print('the best cent to split is ', bestCentToSplit)
            # Relabel the two halves: one keeps the old cluster index, the
            # other gets the next free index.
            bestClustAss[np.nonzero(bestClustAss[:, 0].A == 1)[0], 0] = len(centList)
            bestClustAss[np.nonzero(bestClustAss[:, 0].A == 0)[0], 0] = bestCentToSplit
            clusterAssment[np.nonzero(clusterAssment[:, 0].A == bestCentToSplit)[0], :] = bestClustAss.copy()
            centList[bestCentToSplit] = bestNewCents[0, :].tolist()[0]
            centList.append(bestNewCents[1, :].tolist()[0])
        return np.mat(centList), clusterAssment

    data = loadDataSet('testSet2.txt')
    muCentroids, clusterAssing = biKmeans(data, 3)
    fig = plt.figure(0)
    ax = fig.add_subplot(111)
    # Flatten matrix columns so scatter receives 1-D arrays.
    ax.scatter(data[:, 0], data[:, 1], c=clusterAssing[:, 0].A.ravel(), cmap=plt.cm.Paired)
    ax.scatter(muCentroids[:, 0].A.ravel(), muCentroids[:, 1].A.ravel())
    plt.show()
    print(clusterAssing)
    print(muCentroids)

print(muCentroids)

Code and dataset download: K-means




Originally published on php中文网; please credit the source when reposting.