It’s been a couple of weeks since the White House released its Blueprint for an AI Bill of Rights, a 73-page handbook that was promptly, and probably predictably, pilloried with headlines like, “Biden’s AI Bill of Rights is Toothless Against Big Tech” and “Does the White House AI Bill of Rights Amount to Anything?” Well, I am often described as a contrarian, and I do, in fact, think the AI Bill of Rights points our industry in the right direction. And not a moment too soon.
A first step toward artificial intelligence systems regulation
As I looked through the White House’s AI Bill of Rights handbook I thought, “It’s about time to more formally elucidate the protections citizens should have from companies’ use – and misuse – of AI.” Rules already govern citizens’ data privacy, but their exposure to a myriad of potentially harmful actions and decisions made by unsafe algorithms should not be overlooked. Business leaders and artificial intelligence practitioners need to be able to answer citizens’ demands about how algorithms were developed – are these technologies safe? Unbiased? Do they respect citizens’ data privacy? And what data drives these artificial intelligence systems’ outcomes?
These are consistent themes in my work to formally codify innovation into standard practices for building Responsible AI that is explainable, ethical and auditable. I’m delighted that one of my key innovations for Responsible AI, the use of blockchain technology for model management governance, was recently awarded a patent by the U.S. Patent and Trademark Office.
The march toward AI systems regulation in the United States, now kicked off with the AI Bill of Rights, follows a familiar pattern. In FICO’s earlier days, we pioneered the use of scorecard analytic technology to model credit risk. Passage of the Fair Credit Reporting Act in 1970 dictated that algorithms used for this purpose must be accurate, fair, and transparent about the data used, ushering in stringent regulation and strong consumer rights.
The White House AI Bill of Rights is the first step toward what I believe will be similar regulation of AI and machine learning algorithms. It’s my great hope that this new Bill will set the standard such that organizations will take AI regulation more seriously, and act systematically to demonstrate adherence to governance and AI systems development standards.
A wake-up call for companies
AI models are business tools; they are neither science projects nor are they omniscient. I’ll invoke my favorite quote by the statistician George Box: “All models are wrong, but some are useful.” In this spirit, companies must be sure that the algorithms they choose are demonstrably compliant with the AI Bill of Rights. It is their responsibility to use AI responsibly; this in turn requires an appropriate infrastructure of corporate-defined standards for AI model development and deployment. It also requires a rigorous AI governance framework in order to demonstrate adherence to established standards, and auditable AI in deploying these tools.
The AI Bill of Rights is, in short, a wake-up call that organizations can’t prudently just “run with” algorithms without the right governance support system. If they do, you can guess what is likely to happen next.
A handbook for AI advocacy groups
The tech industry has largely scoffed at the AI Bill of Rights – but in many ways we are not the intended audience. The technical companion portion of the document spells out what citizens should expect from AI systems technology, such as, “You should be protected from unsafe or ineffective systems,” and “You should not face discrimination by algorithms and systems should be used and designed in an equitable way.” The AI Bill of Rights is truly a bill of rights for citizens, codifying what may be common knowledge for the tech-savvy, but perhaps not for the everyday American.
The quasi-Constitutionality that the name “AI Bill of Rights” implies will further fuel AI advocacy groups. As I wrote in my 2020 AI predictions blog:
AI advocacy groups will fight back … Already there is all manner of ways that people are treated unfairly in our society, with advocacy groups to match. With AI advocacy, there may be a different construct, because consumers and their advocacy groups will demand access to the information and process on which the AI system made its decision. AI advocacy will provide empowerment, but it also may drive significant debates between AI experts to interpret and triage the data, model development process, and implementation.
Again, it all comes back to AI governance. When AI advocacy litigation demands:
- the analysis and explanation of data use
- the model development standards followed
- demonstration of adherence in implementation
… the governance infrastructure absolutely must already be in place.
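To make that checklist concrete: one way a governance infrastructure can make model-development records auditable is a tamper-evident, hash-chained log, the basic idea behind blockchain-style model governance. The sketch below is a hypothetical illustration, not FICO’s patented method; every event name and field is invented for the example:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_entry(prev_hash: str, record: dict) -> dict:
    """Create one audit-log entry chained to the previous entry's hash."""
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
        "record": record,
    }
    # Hash the entry's contents so any later edit is detectable.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def verify_chain(entries: list) -> bool:
    """Recompute each hash and check linkage; tampering breaks the chain."""
    prev = "GENESIS"
    for e in entries:
        if e["prev_hash"] != prev:
            return False
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

# Record the three items advocacy litigation would demand (invented fields).
log = []
log.append(make_entry("GENESIS",
                      {"event": "data_use_approved", "dataset": "training_v1"}))
log.append(make_entry(log[-1]["hash"],
                      {"event": "model_built", "standard": "corp-rai-1.0"}))
log.append(make_entry(log[-1]["hash"],
                      {"event": "deployed_per_standard", "model": "risk_v2"}))
```

An auditor can then call `verify_chain(log)`; if anyone retroactively rewrites an entry, the recomputed hashes no longer match and verification fails.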
A new form of customer experience
Ultimately, I believe that the AI Bill of Rights gives companies a new opportunity to provide a superior customer experience – to show the world they’ve worked to comply with the Bill, and to publish their model and AI systems development standards. It’s a chance for companies to differentiate themselves based on how they prevent algorithmic discrimination, demonstrate safety, and continually monitor their AI.
The message for consumers? Choose the companies you do business with carefully. Much as consumers use companies’ environmental, social and governance (ESG) track records to decide whom to do business with, demonstrated responsible use of AI algorithms will fuel customer trust and sentiment. The AI Bill of Rights provides a yardstick against which companies will be measured and held to account. Certainly, ambiguous corporate pledges to “do no evil” will be grossly deficient, and consumers will know their rights – and be ready to brandish them.
Follow me on LinkedIn and Twitter @ScottZoldi to keep up with my latest thoughts on the AI Bill of Rights, Responsible AI, AI governance, machine learning, artificial intelligence and data science.