Artificial intelligence is today’s hottest technology—and how companies create and use unbiased AI should be a topic near the top of Boards of Directors’ agendas. AI is a Board-level issue because this technology makes decisions that profoundly affect all customers, in ways that can be positive or negative. This blog captures the extended conversation around “Diversity and Governance: AI Breaks into the Boardroom,” FICO’s recent LinkedIn Live broadcast featuring Dr. Scott Zoldi, Chief Analytics Officer at FICO and Vanessa Colella, Chief Innovation Officer of Citi, Head of Citi Ventures and Head of Citi Productivity.
Scott: Vanessa, thank you for joining me to talk about diversity and governance in AI, and how artificial intelligence is breaking onto Boardroom agendas. One reason it belongs there: a widely reported statistic from FICO’s recent survey with Corinium Global on the State of Responsible AI indicates that 65% of the analytics executives we surveyed can’t explain how their AI models make decisions. In your role, how do you see a greater understanding of how AI works at the executive and Board level spurring innovation? What’s the connection?
Vanessa: AI has been around for a long time, and only now are we on the cusp of seeing its true promise. We are now seeing a push for explainable AI across the industry. This not only helps make AI fairer and less biased, but can also improve AI models, making them smarter.
By gaining a greater understanding of which factors are the most important predictors, people can understand how to improve models and develop new products for marginalized communities. For example, if we know that a credit-approval AI is biased, and we understand that the bias stems from homeownership, a factor for which marginalized groups have lower rates, we can not only tune the model to be fairer but also offer new products to help first-time home buyers.
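The idea Vanessa describes can be illustrated in a few lines of code: train a simple credit-approval model, inspect which factors drive its decisions, and compare approval rates across groups. This is a minimal, hypothetical sketch; the data, feature names, and effect sizes are all invented for illustration and are not Citi's or FICO's actual models or methods.

```python
# Hypothetical bias-audit sketch: synthetic data, logistic regression,
# coefficient inspection, and a per-group approval-rate comparison.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants. The group flag is used only for the fairness
# audit afterward, never as a model input.
group = rng.integers(0, 2, n)  # 1 = historically marginalized group
# Assume a lower homeownership rate for group 1, as in the example above.
homeowner = rng.binomial(1, np.where(group == 1, 0.3, 0.6))
income = rng.normal(0, 1, n)   # standardized income
approved = (0.8 * income + 1.5 * homeowner + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([income, homeowner])
model = LogisticRegression().fit(X, approved)

# Which factor is the strongest predictor?
for name, coef in zip(["income", "homeowner"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Fairness audit: compare the model's approval rates by group.
pred = model.predict(X)
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}; group 1: {rate_1:.2f}")
```

Here the audit would show a lower approval rate for group 1, traceable to the homeownership coefficient, which is exactly the insight that lets a team both tune the model and design products (such as first-time home buyer programs) that address the underlying gap.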
Scott: Let’s talk about the role of diversity in AI innovation. How do you think about diversity in AI at a macro level, at an organization with 210,000 employees?
Vanessa: Large organizations should be representative of the demographics of the customers they serve. Diversity and the right culture will drive innovation, and as we develop new AI solutions, that diversity will help push boundaries, creating new sources of value and driving new outcomes for smarter AI usage and deployment.
Diverse teams are needed to develop any unbiased products, not just AI. When the voices of marginalized communities are represented, teams can more easily recognize when things are not right and how to better serve all of our customer segments.
Scott: How do diverse teams make a difference in building AI that is unbiased? How do you see this being realized at Citi?
Vanessa: One example: as part of Citi’s $1 billion Action for Racial Equity commitment announced in April, the Citi Ventures Studio team is working on REDDI, short for the Racial Equity in Data and Design Initiative. The goal of REDDI is to develop standards for inclusive software design that eliminate bias, including within AI models, and help deliver equitable outcomes to the communities Citi serves.
Scott: I understand you have been working on a theory called Artificial Enlightenment, the idea that we should be evolving AI models and leveraging data to perform in situ analyses. Can you tell us more about this theory and how it could help incorporate more unbiased, diverse results?
Vanessa: AI is good at carrying out narrow tasks when there is an enormous amount of relevant data available and the situation is fairly predictable. It falls short, however, when conditions are changing rapidly and randomly. These cases require a deep, contextual understanding of the situation and all its variables: the type of analysis the human brain was built for. That is why you can teach a seven-year-old child to safely cross the street in a few minutes, but it might take seven years to teach an autonomous car to do it.
As AI continues to develop, other applications of the tools are emerging. The same technological advances that enable AI—including vastly improved compute speeds, parallel processing, the ability to handle massive heterogeneous data sets, and cloud computing—can now be used to deliver the data people need to make better decisions in situ, that is, locally and in the moment.
Used in this fashion, these capabilities are beginning to give us actionable insights into global challenges such as climate change, public health, food production, supply chain management, and finance. This has the potential to create new opportunities for collaborative innovation to improve our interconnected society and open the door to an era of Artificial Enlightenment, or AE.
Scott: Companies are well-versed in the tenets of corporate governance: operating with accountability, fairness, transparency, and responsibility. In your view, how can Boards improve their ability to understand and trust AI by adopting a model governance framework based on this approach?
Vanessa: I believe the most important things a Board can do when adopting a governance model are, first, establishing a single enterprise-wide view of, and guidance for, the understanding, use, and deployment of AI; and second, making the necessary investment in training for every employee across the organization who will use the tools. One vision and education should be the core tenets.
Scott's next LinkedIn Live will be broadcast on August 3, in conversation with Ganna Pogrebna, Lead for Behavioral Data Science at The Alan Turing Institute. They’ll dive into the topic of AI and regulation in “Is Your AI Ethical? The Answer May Surprise You.” Sign up for the session here, and follow Scott on Twitter @ScottZoldi.