“Under the Hot Rod Hood”: Dr. Scott Zoldi’s AI Series on LinkedIn Live

Season 2 just started! In case you missed Season 1, here’s a recap of Scott’s conversations with Jordan Levine and Cortnie Abercrombie on Artificial Intelligence (AI).

Dr. Scott Zoldi, FICO’s chief analytics officer, scored a pandemic hit with his LinkedIn streamcast series, “Expect the Unexpected: AI and Bias, the Boardroom, Blockchain and Business.” Season 2, “Under the Hot Rod Hood: The Data Science of AI,” kicked off on April 20, and you can replay Episode 1 here. In this blog we recap Scott’s Season 1 conversations with Jordan Levine, Partner at Dynamic Ideas LLC and MIT lecturer, and Cortnie Abercrombie, CEO and Founder of AI Truth.

“Is AI Biased?” A Conversation with Jordan Levine

As an AI practitioner and educator, teaching graduate students at the Massachusetts Institute of Technology, Jordan brought an incisive pragmatism to his conversation with Scott. His “brass tacks” approach to thwarting AI bias has three steps:

Step 1: The business must accept accountability: “Accountability for AI lies with the business’ decision maker, such as the leader with profit and loss (P&L) responsibility,” Jordan said. He noted the dissonance of this statement with the current state of Responsible AI, in which “43% of [survey] respondents say they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people's livelihoods – i.e. audience segmentation models, facial recognition models, recommendation systems.” “That’s not the way the media see it,” he said. “The P&L owner is accountable for the AI decisions his or her business makes.”

Scott agreed. “Driving that conversation is important, because regardless of regulation, there’s accountability. Boards of Directors have a responsibility to make sure their companies have and adhere to a code of standards around model development, that would dictate how it’s done, uniformly. That code of governance spells out the roles and responsibilities of business owners and data scientists.” He added, “Blockchain is one of my favorite technologies for AI governance because it drives accountability, which is a big part of AI ethics.”

Step 2: Determine which known biases exist: Jordan listed his “big four” biases as correlation bias, representation bias, measurement bias and disenfranchisement bias. “We need to get really crisp on specific bias issues, and then think about tools that exist to address them,” Jordan said. “To me, that’s the way forward: to take a complex and high-level discussion of bias and make it actionable.”
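To make one of the “big four” concrete, representation bias can be screened for with a simple comparison of group shares in the training data against a reference population. The sketch below is illustrative only — the group labels, counts and 20% tolerance are assumptions, not anything from the conversation:

```python
# Minimal sketch of a representation-bias check: compare each group's
# share of the training data against its share of a reference population.
# Group names, counts, and the 20% tolerance are illustrative assumptions.

def representation_gaps(train_counts, population_shares, tolerance=0.2):
    """Return groups whose training-data share deviates from the
    population share by more than `tolerance` (relative difference)."""
    total = sum(train_counts.values())
    flagged = {}
    for group, pop_share in population_shares.items():
        train_share = train_counts.get(group, 0) / total
        rel_diff = abs(train_share - pop_share) / pop_share
        if rel_diff > tolerance:
            flagged[group] = round(train_share, 3)
    return flagged

# Example: group A is over-represented and group B under-represented.
train = {"A": 800, "B": 100, "C": 100}
population = {"A": 0.6, "B": 0.3, "C": 0.1}
print(representation_gaps(train, population))
```

A check like this only surfaces a gap; deciding whether the gap matters is exactly the business-owner judgment call Jordan describes in Step 1.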

Step 3: Apply data science tools to address bias: Jordan cited Scott’s data science blogs as providing excellent discussion around key data science tools, essential weaponry in fighting bias. These include monotonicity, palatability and observability.
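Monotonicity, the first of those tools, lends itself to a simple audit: hold the other inputs fixed, sweep one feature across a grid, and confirm the score never moves in the wrong direction. The sketch below uses a toy stand-in scoring function — the function, feature names and grid are assumptions for illustration, not any real model:

```python
# Minimal sketch of a monotonicity audit: sweep one feature over a grid
# (other inputs held fixed) and verify the score never decreases.
# The scoring function is an illustrative stand-in, not a real model.

def is_monotone_increasing(score_fn, feature_grid, fixed_inputs):
    """True if score_fn is non-decreasing as the swept feature increases."""
    scores = [score_fn(x, **fixed_inputs) for x in feature_grid]
    return all(a <= b for a, b in zip(scores, scores[1:]))

# Toy stand-in: score rises with income, falls with debt ratio.
def toy_score(income, debt_ratio):
    return 0.01 * income - 50 * debt_ratio

incomes = range(20_000, 100_001, 10_000)
print(is_monotone_increasing(toy_score, incomes, {"debt_ratio": 0.3}))  # True
```

In production work the same idea is usually enforced at training time rather than audited after the fact — for example, gradient-boosting libraries commonly accept per-feature monotone-constraint parameters.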

Otherwise, AI risks falling prey to one of tech’s enduring tropes: “garbage in, garbage out.”

“AI is a representation of the data that’s fed to it,” Scott said, and asked Jordan, “How prevalent do you think that [GIGO] is today? Who within organizations is actively thinking about the care and feeding of AI models? Is it in everyone’s purview, given how important AI is?”

Jordan replied, “Garbage in, garbage out persists today. But I would assert that it’s quite addressable; we all have the technology tools to do so. The gap is in understanding between business teams and analytics teams: a tech team can rapidly generate univariate plots, for example, but before moving to the modeling phase a member of the business team needs to review those plots with a red pen to determine what makes sense and what doesn’t.” His answer added further dimension to his assertion that business leaders need to own ultimate responsibility for AI.
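The review workflow Jordan describes can be sketched in a few lines: bin one feature, tabulate the outcome rate per bin, and hand the resulting table to the business reviewer with a red pen. The sketch below uses only the standard library; the feature, bins and data are illustrative assumptions:

```python
# Minimal sketch of a univariate review: bin one feature, compute the
# outcome rate per bin, and print a table a business reviewer can
# sanity-check. The feature, bin edges, and data are illustrative.
from collections import defaultdict

def univariate_table(values, outcomes, bin_edges):
    """Outcome rate per bin; reviewers flag bins that defy business sense."""
    stats = defaultdict(lambda: [0, 0])  # bin -> [count, positives]
    for v, y in zip(values, outcomes):
        for lo, hi in zip(bin_edges, bin_edges[1:]):
            if lo <= v < hi:
                stats[(lo, hi)][0] += 1
                stats[(lo, hi)][1] += y
                break
    return {b: s[1] / s[0] for b, s in sorted(stats.items())}

ages = [22, 35, 47, 52, 64, 29, 41, 58]
defaults = [1, 0, 0, 1, 0, 1, 0, 0]
for (lo, hi), rate in univariate_table(ages, defaults, [20, 40, 60, 80]).items():
    print(f"{lo}-{hi}: outcome rate {rate:.2f}")
```

The point of the exercise is the human step after the table prints: a reviewer who knows the business decides which bins make sense and which signal a data problem.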

Jordan Levine

Watch Scott’s entire interview with Jordan.

“What Is Responsible AI? Robust, Explainable, Ethical, Efficient” A Conversation with Cortnie Abercrombie

Cortnie’s 11-year stint in ethical AI at IBM, as well as her experience as a founding editorial board member of AI and Ethics Journal, give her a uniquely broad perspective on the evolution of Responsible AI. In her conversation with Scott, Cortnie reflected on the past, present and future of Responsible AI.

Because there are “AI ‘pods’ within companies, analytic city-states,” it’s hard to institute a standard around Responsible AI, Cortnie said, to provide “strong corporate governance around how we do checks and balances around AI.”

Perhaps surprisingly, she thinks that the US government could help with governance frameworks and auditing procedures. “The Joint AI Council at the Department of Defense is trying to lead by example,” she said. “But I think a bigger challenge is how much do people actually know? Do our legislators know enough to understand what kinds of laws and regulations should be passed?” She noted that while Europe’s General Data Protection Regulation (GDPR) has been in force for several years, only two states, New York and California, have passed similar laws.

“The environment of AI is like much of the tech industry: ‘move fast and break things,’ and Agile [development] is an unspoken norm; companies expect to see something from their AI teams in six to eight weeks. That’s where 90% of what goes wrong is around data,” Cortnie said, to which Scott quipped, “Data is a liability that sometimes provides some value.” Encouragingly, though, Cortnie noted that “there are conversations occurring in California about risk frameworks.”

What about self-regulation on an industry basis? “I’m very pleased to see the IEEE 7000 standard talking about ethics for developing AI systems,” Scott said, asking Cortnie, “Do you see hope in terms of standards for industries?”

Cortnie had two answers: “It’s complex because, first, how new is the industry we’re talking about, such as self-driving cars? In comparison, financial services is very well understood. Second, how high are the stakes? The stakes of self-driving car safety are obviously very high. So I hope a new industry like self-driving cars will adopt self-regulation but I don’t have a lot of faith because it is changing so rapidly.”

“I am proud of the level of maturity in self-regulation in financial services, and hope to see more industries engage in sharing at this level,” Scott agreed, asking Cortnie, “Where do you see maturity?” She believes “anything that gets automation applied to it – such as robotic process automation – and anytime you want to take humans out of the equation” is ripe for scrutiny and thus maturity. “People want to know, ‘What will this thing do when I set it loose and it’s learning?’"

“We are talking about ML capabilities, a close cousin to predictive analytics, on steroids,” she continued. “Most companies are immature with their AI and ML capabilities – but we’re all trying!”

Cortnie Abercrombie

Watch Scott’s entire interview with Cortnie Abercrombie.

Season 2: Under the Hot Rod Hood: The Data Science of AI

Scott’s Season 2 of LinkedIn Live kicked off on April 20 with Agus Sudjianto, EVP and Head of Corporate Risk Modeling at Wells Fargo. Watch the conversation here: Breaking Down the "Black Box" of AI with Interpretable Models.

Follow Scott on Twitter @ScottZoldi and LinkedIn to keep up with his latest thoughts on Responsible AI.
