What does LIME really see in images?

The performance of modern algorithms on certain computer vision tasks, such as object recognition, is now close to that of humans. This success was achieved at the price of complicated architectures depending on millions of parameters, and it has become quite challenging to understand how particular predictions are made. Interpretability methods propose to give us this understanding. In this paper, we study LIME, perhaps the most popular of these methods. On the theoretical side, we show that when the number of generated examples is large, LIME explanations concentrate around a limit explanation for which we give an explicit expression. We further this study for elementary shape detectors and linear models. As a consequence of this analysis, we uncover a connection between LIME and integrated gradients, another explanation method. More precisely, the LIME explanations are similar to the sum of integrated gradients over the superpixels used in the preprocessing step of LIME.
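The stated connection can be illustrated with a minimal sketch. For a linear model $f(x) = w^\top x$, integrated gradients admit the closed form $\mathrm{IG}_i = (x_i - \bar{x}_i)\, w_i$, since the gradient is constant along the interpolation path; summing these attributions over each superpixel yields one score per superpixel, the quantity the abstract relates to the limit LIME explanation. All dimensions, weights, and the superpixel partition below are hypothetical toy values, not the paper's setup.

```python
import numpy as np

# Toy linear model f(x) = w . x. Integrated gradients from a baseline x_bar
# have the closed form IG_i = (x_i - x_bar_i) * w_i because the gradient of
# a linear model is constant along the straight-line path.
rng = np.random.default_rng(0)
d = 12                       # number of pixels (flattened toy "image")
w = rng.normal(size=d)       # hypothetical model weights
x = rng.normal(size=d)       # hypothetical input image, flattened
baseline = np.zeros(d)       # all-zero baseline image

ig = (x - baseline) * w      # per-pixel integrated gradients

# Partition the pixels into 3 "superpixels" of 4 pixels each -- a stand-in
# for the segmentation LIME performs in its preprocessing step.
superpixels = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]

# One attribution score per superpixel: the sum of integrated gradients
# over that region, which the paper connects to the limit LIME explanation.
summed_ig = np.array([ig[s].sum() for s in superpixels])
print(summed_ig)
```

In this linear special case the superpixel scores partition the total attribution exactly: the three values sum to $\mathrm{IG}$'s total, mirroring how LIME assigns one coefficient per superpixel rather than one per pixel.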
