Several techniques for improving machine learning interpretability

Techniques and methods for improving machine learning interpretability

While questions of transparency and ethics may feel abstract for the data scientist on the ground, there are, in fact, a number of practical things that can be done to improve an algorithm’s interpretability and explainability.

Algorithmic generalization

The first is to improve generalization. This sounds simple, but it isn't that easy. When you consider that most machine learning engineering involves applying algorithms in a very specific way to uncover a certain desired outcome, the model itself can feel like a secondary element: simply a means to an end. However, by shifting this attitude to consider the overall health of the algorithm, and the data on which it is running, you can begin to set a solid foundation for improved interpretability.
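As a rough illustration of what checking the "overall health" of an algorithm can mean in practice, the sketch below looks at how stable a model's score is across cross-validation folds rather than trusting a single train/test split. The synthetic dataset and the logistic regression pipeline are placeholders, not a recommendation for any particular model.

```python
# Minimal sketch: checking generalization with k-fold cross-validation
# instead of a single train/test split. The synthetic dataset and the
# logistic regression model are placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Scores that are both high and stable across folds are a basic sign that
# the model generalizes rather than memorizing one particular split.
scores = cross_val_score(model, X, y, cv=5)
print("accuracy per fold:", scores.round(3), "mean:", round(scores.mean(), 3))
```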

Pay attention to feature importance

This should be obvious, but it's easily missed. Looking closely at how the various features of your algorithm have been set is a practical way to engage with a diverse range of questions, from business alignment to ethics. Debate and discussion over how each feature should be set might be a little time-consuming, but having an awareness that different features have been set in a certain way is nevertheless an important step towards interpretability and explainability.
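One concrete, low-effort way to start that conversation is to actually compute and inspect feature importances. The snippet below is a minimal sketch using scikit-learn's permutation importance; the dataset and the random forest are just stand-ins for whatever you are actually working with.

```python
# Minimal sketch: permutation importance with scikit-learn.
# The breast cancer dataset and random forest are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure how much
# the score drops; bigger drops mean the model leans on that feature more.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
```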

LIME: Local Interpretable Model-Agnostic Explanations

While the techniques above offer practical steps that data scientists can take, LIME is an actual method developed by researchers to gain greater transparency on what's happening inside an algorithm. Its creators describe it as a way to explain “the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.”

What this means in practice is that LIME builds an approximation of the model by testing it to see what happens when certain aspects of the input are changed. Essentially, it tries to reproduce the model's output for a given prediction from slightly perturbed versions of the input, through a process of experimentation.
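A minimal sketch of what this looks like with the open-source lime package is shown below; the classifier and dataset are placeholders, but the flow (fit a model, hand LIME a prediction function, ask it to explain a single instance) is the general pattern.

```python
# Minimal sketch: explaining a single prediction with LIME on tabular data.
# Assumes the `lime` package (pip install lime); model and data are stand-ins.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the input around this one instance, queries the model's
# predict_proba, and fits a simple local surrogate; the surrogate's weights
# are the explanation.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```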

DeepLIFT (Deep Learning Important FeaTures)

DeepLIFT is a useful model in the particularly tricky area of deep learning. It works through a form of backpropagation: it takes the output, then attempts to pull it apart by ‘reading’ the various neurons that have gone into developing that original output.

Essentially, it’s a way of digging back into the feature selection inside of the algorithm (as the name indicates).
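One hedged way to get DeepLIFT-style attributions without implementing the method by hand is shown below, using shap's DeepExplainer, which builds on the DeepLIFT idea. The tiny Keras model and random data are placeholders, and exact behaviour depends on your shap and TensorFlow versions.

```python
# Rough sketch: DeepLIFT-style attributions via shap.DeepExplainer, which
# builds on the DeepLIFT idea. The tiny Keras model and the random data are
# placeholders; exact behaviour depends on shap / TensorFlow versions.
import numpy as np
import shap
import tensorflow as tf

X = np.random.rand(200, 10).astype("float32")   # stand-in inputs
y = (X.sum(axis=1) > 5).astype("float32")       # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# The explainer compares activations against a background (reference) set
# and propagates the differences backwards to score each input feature.
background = X[:50]
explainer = shap.DeepExplainer(model, background)
attributions = explainer.shap_values(X[:5])
print(np.array(attributions).shape)   # per-feature contribution scores
```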

Layer-wise relevance propagation

Layer-wise relevance propagation is similar to DeepLIFT, in that it works backwards from the output, identifying the most relevant neurons within the neural network until you return to the input (say, for example, an image). If you want to learn more about the mathematics behind the concept, this post by Dan Shiebler is a great place to begin.
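To make that backwards pass a little more concrete, here is a small numpy sketch of the epsilon rule, one of the standard LRP propagation rules, on a toy fully connected ReLU network. The weights and input are random placeholders rather than a trained model, and bias relevance is simply ignored for brevity.

```python
# Numpy sketch of the epsilon rule for layer-wise relevance propagation on a
# toy fully connected ReLU network. Weights and the input are random
# placeholders; bias relevance is ignored for brevity.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    """Run the network, keeping the input activations of every layer."""
    activations = [x]
    for W, b in layers:
        x = np.maximum(0.0, x @ W + b)            # dense layer + ReLU
        activations.append(x)
    return activations

def lrp_epsilon(activations, layers, eps=1e-6):
    """Propagate relevance from the output back to the input features."""
    relevance = activations[-1]                    # start from the output scores
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = a @ W + b                              # this layer's pre-activations
        s = relevance / np.where(z >= 0, z + eps, z - eps)   # stabilised ratio
        relevance = a * (s @ W.T)                  # redistribute to the inputs
    return relevance

layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 3)), np.zeros(3))]
x = rng.normal(size=(1, 4))

acts = forward(x, layers)
print(lrp_epsilon(acts, layers))                   # relevance of each input feature
```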
