Despite (or perhaps because of) being one of the cheesier movies I can recall from the 1980s, “Highlander” (1986) has had enormous staying power in nerd culture. I attribute its enduring popularity to three things: a timeless battle between immortal warriors, a bangin’ soundtrack by Queen, and the inimitable Sir Sean Connery’s one-liner: “In the end, there can be only one.”
As it turns out, “There can be only one” has become a bit of a mantra in the tech world. For example, in 2009 a tech CEO wrote about innovation and the Highlander Principle in Harvard Business Review. A couple of weeks ago, legendary tech journalist Kara Swisher chatted extensively about it with former Twitter CEO Dick Costolo on her podcast “Sway.” Talking about the concept of a decentralized Twitter enabled by blockchain, Costolo joked, “Any good tech nerd has to know the Highlander reference. If you don’t, you’re kicked out of the club.”
A Single Corporate-Wide Standard for AI Development
As the head of FICO’s data science organization, I’ve long held “There can be only one” as a mantra of my own. It’s been a fixture in my talks about model development for years, and in June 2021 I wrote about the Highlander Principle in a blog about the importance of Auditable AI:
Many companies suffer from competing data science religions—individual groups or, worse, renegade scientists who march to the beat of their own philosophical drum. In some cases, critical pieces of model governance are simply, and disturbingly, not addressed. Moving from research mode to production mode requires that data scientists and companies have a firm standard in place. Since I…think that innovation should be driven by the Highlander Principle (“There can be only one.”), here are the questions your organization needs to ask in developing Auditable AI:
- How is the analytic organization structured today?
- How is the existing governance committee of analytic leaders structured?
- How is Responsible AI being addressed?
- What is the state of the data ethics program and data usage policies?
- What are the AI development standards?
- How is the company achieving Ethical AI?
- What is the company’s philosophy around AI research?
- Is the company uniformly ethical with its AI?
(The questions above were edited for brevity. For the full version read my blog “Beyond Responsible AI: 8 Steps to Auditable Artificial Intelligence.”)
Why “There Can Be Only One”
Many organizations, including data science teams, derail innovation with internal competition that fragments resources and energy. In a recent article about how to foster healthy intracompany rivalry, MIT Sloan Management Review offers this as its first guiding principle:
1. Unify with common purpose. To engage in healthy competition inside organizations, people need to see themselves as united by a common purpose and a higher calling. At NASA, for example, employees’ strong belief that their work contributes to a greater purpose provides an effective counterbalance to a results-driven and competitive internal culture. Every year for nearly a decade, NASA has ranked No. 1 in employee satisfaction among large federal agencies.
For data science organizations, the Highlander Principle establishes not only a single common purpose—to create Responsible AI that is innovative, ethical, explainable and auditable—but a single detailed corporate-wide framework on how to do so. This is what AI governance and model governance are all about: hammering out a singular corporate vision for fair, unbiased use of AI, and governing the path to achieve it with common principles, processes and tools. It’s not fast and it’s not easy. But if you want your company’s investment in AI to have the staying power of “Highlander,” it’s true that “there can be only one.”