Understanding Precision, Recall, and F1 Score for Classification, with Code Practice

In machine learning and deep learning, the predictions of a classification task can be sorted into the following four cases, which together form the confusion matrix (a small counting sketch follows the four definitions):
True Positive (TP): predicted positive and the label is positive; the prediction is correct

False Negative (FN): predicted negative but the label is positive; the prediction is wrong, i.e. a missed detection

False Positive (FP): predicted positive but the label is negative; the prediction is wrong, i.e. a false alarm

True Negative (TN): predicted negative and the label is negative; the prediction is correct
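
To make the four counts concrete, here is a minimal sketch that tallies them from a pair of binary label/prediction lists; the values of y_true and y_pred are made up purely for illustration:

# Count TP / FN / FP / TN by comparing each prediction with its label (1 = positive, 0 = negative)
y_true = [1, 0, 0, 1, 0, 1]   # ground-truth labels (illustrative)
y_pred = [1, 1, 0, 0, 0, 1]   # model predictions (illustrative)

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correct positive
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed positive
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarm
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct negative
print('TP=%d FN=%d FP=%d TN=%d' % (tp, fn, fp, tn))  # TP=2 FN=1 FP=1 TN=2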
Precision
Precision = number of correctly predicted positive samples / total number of samples predicted as positive

precision = TP/(TP+FP)
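
A quick numeric check of this formula on the same toy labels as above; sklearn.metrics.precision_score is used only to cross-check the hand computation and is not part of the pymetric example later in the post:

from sklearn.metrics import precision_score

y_true = [1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1]
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
print(tp / (tp + fp))                   # 0.666..., i.e. TP/(TP+FP)
print(precision_score(y_true, y_pred))  # same value from sklearn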
Recall
Recall = number of correctly predicted positive samples / total number of actual positive samples, also known as the true positive rate

recall = TP/(TP+FN)
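
The same kind of check for recall, again with made-up labels and sklearn.metrics.recall_score only as a reference:

from sklearn.metrics import recall_score

y_true = [1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1]
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1
print(tp / (tp + fn))                # 0.666..., i.e. TP/(TP+FN)
print(recall_score(y_true, y_pred))  # same value from sklearn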
F1 score
The F1 score is the harmonic mean of precision and recall:

2/F1 = 1/P+1/R

F1 score = 2TP/(2TP+FP+FN)

The F1 score is high only when precision and recall are both high; if either one is low, the harmonic mean drags the F1 score down.
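
The two F1 forms above give the same number, which the following sketch checks on the same toy labels; sklearn.metrics.f1_score again only serves as a reference value:

from sklearn.metrics import f1_score

y_true = [1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1]
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

p = tp / (tp + fp)
r = tp / (tp + fn)
print(2 * p * r / (p + r))           # harmonic mean, from 2/F1 = 1/P + 1/R
print(2 * tp / (2 * tp + fp + fn))   # count form, same value
print(f1_score(y_true, y_pred))      # sklearn reference, same value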
mAP (mean Average Precision):
the mean of the per-class average precision (AP) over the dataset.
mAP50-95:
the mAP computed at IoU thresholds from 0.50 to 0.95 in steps of 0.05, giving ten mAP values that are then averaged.
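
A sketch of just the mAP50-95 averaging step; the per-threshold mAP values below are made-up placeholders, since in practice they come from evaluating a detector at each IoU threshold:

import numpy as np

iou_thresholds = np.linspace(0.50, 0.95, 10)   # 0.50, 0.55, ..., 0.95 (ten thresholds)
# Hypothetical mAP at each IoU threshold (placeholder numbers, not real results)
map_per_threshold = [0.72, 0.70, 0.68, 0.65, 0.61, 0.56, 0.50, 0.42, 0.31, 0.18]

map50 = map_per_threshold[0]                   # mAP50: IoU threshold 0.50 only
map50_95 = float(np.mean(map_per_threshold))   # mAP50-95: mean of the ten values
print('thresholds:', np.round(iou_thresholds, 2))
print('mAP50    =', map50)
print('mAP50-95 =', round(map50_95, 3))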

Code practice: the snippet below feeds toy ground-truth and prediction rows to the pymetric library's AllMetrics.measure interface and prints each requested metric.
from pymetric import AllMetrics


def example_1():
    """Regression-style metrics on two toy rows of ground-truth and predicted values."""
    A = [[0.1, 0.2, 0.2, 0.6], [0.5, 0.1, 0.2, 0.3]]   # ground-truth values
    B = [[0.7, 0.5, 0.8, 0.7], [0.1, 0.3, 0.7, 0.1]]   # predicted values
    print('Y_truth', A)
    print('Y_pred ', B)
    for method in ['abs_error', 'rmse', 'r2', 'ndcg', 'cos']:
        try:
            print(method, AllMetrics.measure(A, B, method))
        except Exception as e:
            print('Exception: %s' % repr(e))


def example_2():
    """Classification metrics on two toy rows: hard 0/1 predictions and probability scores."""
    A = [[1, 0, 0, 1, 0, 1], [1, 0, 0, 1, 0, 1]]               # ground-truth labels
    B = [[0, 1, 1, 0, 1, 0], [0.2, 0.3, 0.7, 0.1, 0.3, 0.8]]   # hard predictions / scores
    print('Y_truth', A)
    print('Y_pred ', B)
    for method in ['precision', 'recall', 'f1_score', 'auc', 'map']:
        try:
            print(method, AllMetrics.measure(A, B, method))
        except Exception as e:
            print('Exception: %s' % repr(e))


if __name__ == '__main__':
    # example_1()
    example_2()

