xAI Toolkit:
practical, explainable machine learning


Authors: Andy Flint, Arash Nourian, Jari Koister

White paper

Machine learning (ML) models, compared with strictly additive models, can deliver notable predictive lift when the data exhibit complex relationships. However, without an understanding of the relationships captured by the ML model, we risk encoding accidental, unintentional and even undesirable features into these predictions. These surprising relationships may be introduced by unexpected biases in our data-collection methods, or by confounding treatments in our historical practices, which, if undetected, could yield models that are unfit for their intended tasks. On the bright side, however, revelations from an ML model's content can inspire greater insights for the model creators. They may also foster greater trust among its users. This paper seeks to explore, illustrate and compare Explainable Artificial Intelligence (xAI) techniques that can help us gain deeper insights from ML models and operationalize them with far greater confidence. Specifically, we outline some of the explainability support for machine learning provided by toolsets available from FICO.
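To make the idea concrete, the following is a minimal sketch of one generic post-hoc xAI technique, permutation importance, applied to a fitted ML model. This is an illustration using scikit-learn on synthetic data, not the FICO xAI Toolkit itself; the dataset, model choice, and parameters are assumptions made for the example.

```python
# Illustrative sketch (NOT the FICO xAI Toolkit): permutation importance,
# a generic post-hoc explainability technique that reveals which inputs
# a fitted ML model actually relies on for its predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: a few informative features plus noise features,
# standing in for real data with complex relationships.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           n_redundant=0, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop flags a relationship the model has encoded, whether
# intended or (as the paper warns) accidental.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

An inspection like this is how unexpected biases surface: if a feature that should be irrelevant to the task scores high, the model may be exploiting an artifact of data collection rather than a genuine relationship.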