For the past couple of years, artificial intelligence (AI) has been the enfant terrible of the business world, a technology full of unconventional and controversial behavior that has shocked, provoked and enchanted audiences worldwide. In 2020, I predict that is all going to change: AI is going to grow up, encountering new demands in the areas of responsibility, advocacy and regulation.
Beyond Ethical AI: Responsible AI
For the past few years I’ve been working hard on new data science patents, pushing AI technology to be more defensive, explainable and ethical. In 2020, driven by the ever-rising onslaught of new AI applications, coupled with the fact that regulation around AI explainability, transparency and ethics is still emerging, there will be higher expectations for responsible AI systems.
Medical devices provide an obvious analog. A medical device such as a heart pacemaker that is rushed to market may be poorly or negligently designed. If people using that device are harmed, there would be liability; the company providing it could be sued by individuals or groups if a lack of rigor and/or reasonable effort was proven.
Along those lines, there will be a more punitive response for companies that consider explainable, ethical AI to be optional. “Oops! We’ve made a mistake with an algorithm and it’s having a harmful effect” will no longer make for an interesting news story of AI gone rogue; it will be a call to action.
I predict that in 2020, AI insurance will become available; companies will look to insure their AI algorithms against liability lawsuits. Using blockchain or other means for auditable model development and model governance will become essential in demonstrating due diligence in building, explaining and testing AI models.
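To make the due-diligence idea concrete, here is a minimal sketch, in Python, of one way an auditable model-development record could work: each development event (training run, bias test, deployment sign-off) is hashed and chained to the previous one, blockchain-style, so history cannot be silently rewritten. The class, event names and fields below are illustrative assumptions, not any real governance product.

```python
import hashlib
import json
import time

def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a JSON-serialized record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ModelAuditLog:
    """Append-only, hash-chained log of model-development events.

    Each entry embeds the hash of the previous entry, so altering any
    historical record invalidates every hash that follows it.
    """

    def __init__(self):
        self.entries = []

    def record(self, event: str, details: dict) -> dict:
        entry = {
            "event": event,               # e.g. "trained", "bias_tested"
            "details": details,
            "timestamp": time.time(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else "0" * 64,
        }
        entry["hash"] = _hash(entry)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was tampered with."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or _hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Example: log the lifecycle of one (hypothetical) model version.
log = ModelAuditLog()
log.record("trained", {"model": "credit_risk_v2", "data_hash": "abc123"})
log.record("bias_tested", {"protected_attrs": ["age", "zip"], "passed": True})
assert log.verify()
```

A full governance system would add digital signatures, distributed storage and access controls; the point of the chain is simply that any retroactive edit breaks every hash after it, which is what makes the record useful for demonstrating due diligence.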
AI advocacy groups will fight back
What effects of AI may be harmful? Think about someone being denied rightful entry to a country due to inaccurate facial recognition. Or misdiagnosed by disease-seeking robotic technology. Or denied a loan because a new type of credit score built on non-causal features rates them poorly. Or incorrectly blamed as the cause of an auto accident by their insurance company’s mobile app. What rights do humans have when AI has done them wrong?
There are already all manner of ways that people are treated unfairly in our society, with advocacy groups to match. AI advocacy may take a different form, because consumers and their advocacy groups will demand access to the information and process on which an AI system based its decision. AI advocacy will provide empowerment, but it may also drive significant debates among AI experts over how to interpret and triage the data, the model development process, and the implementation.
AI advocacy is a radical idea, but a similar concept has been in place for more than 30 years with the FICO® Score: consumers have access to their FICO Score and to the credit bureau data that informs it. If someone is denied credit, the would-be issuer is required to explain why; if a consumer believes an item on a credit bureau report is inaccurate, they can file a dispute to have it investigated and removed.
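That adverse-action requirement has a simple algorithmic analog. As a hedged, toy illustration, the Python sketch below scores an applicant with a linear scorecard and, when the score falls below the cutoff, reports the features that pulled it down the most as “reason codes.” The feature names, weights and cutoff are all invented for illustration; real credit-scoring models are far more involved.

```python
# Toy reason-code generation for a linear scorecard. All feature names,
# weights, and the cutoff below are invented for illustration only.
WEIGHTS = {
    "payment_history": -120.0,   # more missed payments -> lower score
    "utilization":     -80.0,    # higher utilization -> lower score
    "account_age_yrs":  15.0,    # longer history -> higher score
}
BASELINE = 700.0
CUTOFF = 620.0

def score_with_reasons(applicant: dict, top_n: int = 2):
    """Score an applicant and, if declined, return the features that
    pulled the score down the most (the 'reason codes')."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    if score >= CUTOFF:
        return score, []
    # The most negative contributions are the main reasons for denial.
    reasons = sorted(contributions, key=contributions.get)[:top_n]
    return score, reasons

score, reasons = score_with_reasons(
    {"payment_history": 0.5, "utilization": 0.9, "account_age_yrs": 2.0}
)
print(f"score={score:.0f}, reasons={reasons}")
# Prints: score=598, reasons=['utilization', 'payment_history']
```

For nonlinear models the same role is played by per-prediction attribution methods, but the consumer-facing contract is identical: a denial must come with the reasons that drove it.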
AI governance
Leaders in many industries hold a blanket, negative view that government regulation inhibits innovation. With AI, this bias couldn’t be further from the truth. Regulators and legislators are trying to protect consumers from the negative effects of technology (in reality, from the human creators who misuse AI/ML) with the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act and others. However, these groups are often making demands of technology about which they have little understanding.
Granted, at the opposite end of the scale, there are the companies that clasp their metaphorical hands and say, “We are ethical; we will do no evil with your data,” although, without a standard of accountability, we won’t know for sure. For both of these extremes, and everything in between, in 2020 I predict we will see the rise of international standards to define a framework for safe and trusted AI, because regulation keeps companies honest. And I hope to see AI experts support and drive regulation of the industry, to ensure fairness and inculcate responsibility.
Experience matters
As AI grows to be a pervasive technology, there is little trust in the morals and ethics of many of the companies that use it. As a data scientist, I find that disheartening, and it’s not exactly what I envisioned blogging about when I started my AI Predictions blogs in 2016. However, as modern society discovers more about the damage that can be done through misuse of artificial intelligence, it’s clear that experience, not “move fast and break things,” is what matters. It’s time for AI to grow up.
Follow me now, and in 2020, on Twitter @ScottZoldi.