Line Fitting with RANSAC
The RANSAC Algorithm
Introduction
Random sample consensus (RANSAC) is an iterative method for estimating the parameters of a mathematical model from a set of observed data that contains outliers, when the outliers are to be accorded no influence on the estimated values. It can therefore also be interpreted as an outlier detection method.[1] It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, and this probability increases as more iterations are allowed. The algorithm was first published by Fischler and Bolles at SRI International in 1981. They used RANSAC to solve the Location Determination Problem (LDP), in which the goal is to determine the points in space that project onto an image into a set of landmarks with known locations.
A basic assumption is that the data consists of "inliers", i.e. data whose distribution can be explained by some set of model parameters (though it may be subject to noise), and "outliers", which are data that do not fit the model. Outliers can come, for example, from extreme values of the noise, from erroneous measurements, or from incorrect hypotheses about the interpretation of the data. RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure that can estimate the parameters of a model that optimally explains or fits this data.
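To make the inlier/outlier assumption concrete, here is a small synthetic sketch. Everything in it is made up for illustration (the true line y = 2x + 1, the noise level, and the point counts): inliers scatter in a narrow band around the true model, while outliers land anywhere in the plane.

```python
import random

random.seed(0)

# Hypothetical synthetic data: "inliers" scatter around the line y = 2x + 1
# with small Gaussian noise; "outliers" are spread uniformly over the plane.
inliers = [(x, 2 * x + 1 + random.gauss(0, 0.3)) for x in range(50)]
outliers = [(random.uniform(0, 50), random.uniform(-30, 130)) for _ in range(15)]
points = inliers + outliers

# Residual of each point against the true model: the inliers stay inside a
# narrow band, which is exactly what a RANSAC distance threshold captures.
residuals = [abs(y - (2 * x + 1)) for x, y in points]
small = sum(r < 1.0 for r in residuals)
print(small, "of", len(points), "points lie within the band")
```

Almost all of the 50 inliers fall inside the band, while nearly every outlier falls outside it, so a simple distance threshold separates the two populations.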
RANSAC Pseudocode
Given:
data – a set of observations
model – a model to explain the observed data points
n – Minimum number of data points required to estimate model parameters.
k – maximum number of iterations allowed
t – threshold value to determine data points that fit the model well
d – number of close data points required to assert that a model fits well
Return:
bestFit – the model that best fits the data (or null if no good model is found)
iterations = 0
bestFit = null
bestErr = something really large
while iterations < k do
maybeInliers := n randomly selected values from data
maybeModel := model parameters fitted to maybeInliers
alsoInliers := empty set
for every point in data not in maybeInliers do
if point fits maybeModel with an error smaller than t
add point to alsoInliers
end for
if the number of elements in alsoInliers is > d then
// This implies that we may have found a good model
// Now test how good it is
betterModel := model parameters fitted to all points in maybeInliers and alsoInliers
thisErr := a measure of how well betterModel fits these points
if thisErr < bestErr then
bestFit := betterModel
bestErr := thisErr
end if
end if
increment iterations
end while
return bestFit
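The pseudocode above maps almost line-for-line onto Python. The sketch below is a minimal illustration under assumed choices, not the OpenCV-based implementation given later in this post: the model is a non-vertical line y = m*x + b fitted by ordinary least squares, and thisErr is taken as the mean squared perpendicular distance over the consensus set, which is one reasonable reading of "a measure of how well betterModel fits these points".

```python
import math
import random

def fit_line(pts):
    # least-squares fit of y = m*x + b; assumes the line is not vertical
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def point_error(model, p):
    # perpendicular distance from point p to the line y = m*x + b
    m, b = model
    return abs(m * p[0] - p[1] + b) / math.sqrt(m * m + 1)

def ransac(data, n, k, t, d):
    best_fit = None
    best_err = float('inf')
    for _ in range(k):
        maybe_inliers = random.sample(data, n)
        maybe_model = fit_line(maybe_inliers)
        also_inliers = [p for p in data
                        if p not in maybe_inliers and point_error(maybe_model, p) < t]
        if len(also_inliers) > d:
            # we may have found a good model -- refit on the full consensus set
            consensus = maybe_inliers + also_inliers
            better_model = fit_line(consensus)
            this_err = sum(point_error(better_model, p) ** 2
                           for p in consensus) / len(consensus)
            if this_err < best_err:
                best_fit = better_model
                best_err = this_err
    return best_fit  # None if no model was ever accepted
```

Normalizing the error by the consensus size keeps models with different numbers of supporting points comparable; summing raw squared distances would penalize a model simply for attracting more inliers.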
[Citation] The two sections above are taken entirely from Wikipedia; they have been translated and pasted here simply for ease of reading.
Line Fitting with RANSAC
import random

import cv2 as cv
import numpy as np


def ransacLine(points, iterations=100, good_len=300, sample_points=10):
    '''
    ransacLine makes a robust estimate of a straight line from a 2-D point set.
    The algorithm follows the RANSAC pseudocode on Wikipedia, check it out!
    [Input]
        points: list of 2-D points, e.g. [(x0, y0), (x1, y1), ...]
    [Parameters]
        iterations: more iterations may yield a better model, at the cost of more time
        good_len: how many additional nearby points are needed for a model to count as good
        sample_points: how many points are randomly chosen to fit the initial model
    Author: penway from cnblogs.com/penway
    '''
    bestModel = None
    bestLoss = float('inf')
    for _ in range(iterations):
        # choose maybeInliers
        maybeInliers = random.sample(points, sample_points)
        # fit maybeModel (distType=cv.DIST_L1, param=0.1, reps=0.01, aeps=0.01)
        maybeModel = cv.fitLine(np.asarray(maybeInliers), cv.DIST_L1, 0.1, 0.01, 0.01).tolist()
        # maybeModel: [[dx], [dy], [x0], [y0]] -- a unit direction vector and a point on the line
        # implicit form ax + by + c = 0: [a, b, c] = [dy, -dx, dx*y0 - dy*x0]
        maybeFunc = [maybeModel[1][0], -maybeModel[0][0],
                     maybeModel[0][0]*maybeModel[3][0] - maybeModel[1][0]*maybeModel[2][0]]
        alsoInliers = []
        for p in points:
            if p not in maybeInliers:
                # (dx, dy) is unit length, so |ax + by + c| is the perpendicular
                # distance; abs() is needed so points on both sides of the line count
                if abs(p[0]*maybeFunc[0] + p[1]*maybeFunc[1] + maybeFunc[2]) < 4:
                    alsoInliers.append(p)
        if len(alsoInliers) > good_len:
            # This implies that we may have found a good model, now test how good it is
            consensus = maybeInliers + alsoInliers
            betterModel = cv.fitLine(np.asarray(consensus), cv.DIST_L1, 0.1, 0.01, 0.01).tolist()
            betterFunc = [betterModel[1][0], -betterModel[0][0],
                          betterModel[0][0]*betterModel[3][0] - betterModel[1][0]*betterModel[2][0]]
            thisLoss = 0  # sum of squared distances (L2) as the loss function
            for p in consensus:
                dist = p[0]*betterFunc[0] + p[1]*betterFunc[1] + betterFunc[2]
                thisLoss += dist * dist
            if thisLoss <= bestLoss:
                bestModel = betterModel
                bestLoss = thisLoss
    return bestModel  # None if no good model was found
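The inlier test relies on the implicit-line representation [a, b, c]: the expression a*x + b*y + c gives a signed value at each point, and because cv.fitLine returns a unit direction vector, a² + b² = 1, so the absolute value of that expression is directly the perpendicular distance. A quick self-contained check of this, using the made-up line y = x (so [a, b, c] = [1/√2, -1/√2, 0]):

```python
import math

# Hypothetical line y = x, written as a*x + b*y + c = 0 with a unit normal:
a, b, c = 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0

def distance(p):
    # with a*a + b*b == 1, |a*x + b*y + c| is the perpendicular distance
    return abs(a * p[0] + b * p[1] + c)

print(round(distance((0.0, 1.0)), 4))  # 0.7071, i.e. 1/sqrt(2) from y = x
print(round(distance((3.0, 3.0)), 4))  # 0.0, the point lies on the line
```

Without the absolute value, points on one side of the line would get large negative values and pass a `< threshold` test no matter how far away they are, silently admitting outliers on that side into the consensus set.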