The General Data Protection Regulation (GDPR) is a wide-ranging and complex regulation intended to strengthen and unify data protection for all individuals within the European Union (EU). A year ago I blogged about the data governance ramifications of GDPR, and in this blog I’ll focus on another facet of GDPR to talk about a related analytics topic: explainable artificial intelligence (AI).
First, let’s start with GDPR. Article 22 of GDPR, “Automated individual decision-making, including profiling,” concerns the use of data in decision-making that affects individuals, such as a person applying for a loan. The regulation says:
“1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
Point 2 of Article 22 describes exceptions (including situations involving the person’s explicit consent, such as applying for a loan), but the key issue for our discussion here is in point 3:
“…the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”
(Are you still with me? We’re almost there.)
This excellent discussion of Article 22, by the European law firm Fieldfisher, drills into the topic further: “In particular, the controller must allow for a human intervention and the right for individuals to express their point of view, to obtain further information about the decision that has been reached on the basis of this automated processing, and the right to contest this decision.”
In risk applications, and under Article 22 of GDPR, customers need clear-cut reasons for why a decision adversely affected them. Where the decision is driven by a model, the model needs to point clearly to the drivers of the negative score. Since most credit decision models are scorecard-based, answering that particular question (“Why wasn’t I approved for this loan?”) is straightforward.
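To see why scorecards are easy to explain, here’s a minimal sketch of a points-based scorecard producing reason codes. The features, point allocations and cutoff below are invented for illustration, not drawn from any real scoring system:

```python
# Hypothetical points-based scorecard: each characteristic bin earns
# points, and the characteristics that cost the applicant the most
# points (versus the best possible bin) become the adverse-action reasons.

SCORECARD = {
    "payment_history": {"no_late_payments": 60, "one_late_payment": 35,
                        "multiple_late_payments": 10},
    "utilization": {"under_30_pct": 50, "30_to_70_pct": 30, "over_70_pct": 5},
    "account_age_years": {"over_5": 40, "2_to_5": 25, "under_2": 10},
}

CUTOFF = 120  # applicants scoring below this threshold are declined


def score_applicant(applicant):
    """Return (total_score, reasons), where reasons are the characteristics
    ranked by how many points the applicant lost on each."""
    total = 0
    shortfalls = []
    for feature, bin_label in applicant.items():
        points = SCORECARD[feature][bin_label]
        best = max(SCORECARD[feature].values())
        total += points
        shortfalls.append((feature, best - points))
    # The largest point shortfalls are the top reasons for a low score.
    reasons = [f for f, gap in sorted(shortfalls, key=lambda s: -s[1]) if gap > 0]
    return total, reasons


applicant = {
    "payment_history": "one_late_payment",
    "utilization": "over_70_pct",
    "account_age_years": "under_2",
}
total, reasons = score_applicant(applicant)
decision = "approve" if total >= CUTOFF else "decline"
print(decision, total, reasons)
```

Because the score is just a sum of per-characteristic points, the explanation falls directly out of the arithmetic: here the applicant is declined, and high utilization is the top reason, followed by account age. No separate explainability machinery is needed, which is exactly what an AI model lacks.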
But what happens when your model was built with AI? As I discussed in a recent post, AI is a very useful tool for enhancing credit risk scorecards. But explainability is not AI’s strong suit — hence its reputation as a “black box” technology.
In the Digital Single Market, a planned sector of the European Single Market covering digital marketing, e-commerce and telecommunications, the potential for discrimination against individuals (based on factors such as geographic location) is very real, and the propensity to use AI in decision-making in this domain is much greater.
As a hypothetical example, consider how offers for mobile phone service plans are calculated, and to whom they are offered. If a consumer has been adversely affected by an AI-driven decision model and asks, “Why wasn’t I offered X mobile service at Y rate?” an answer of “That’s what the model says” doesn’t cut it.
What we need is Explainable AI. In my next post, I’ll look at some of the leading approaches for breaking AI out of the black box.