The EU’s new content moderation law, the Digital Services Act, includes annual audit requirements for the data and algorithms used by large tech platforms, and the EU’s upcoming AI Act could also allow authorities to audit AI systems. The US National Institute of Standards and Technology also recommends AI audits as a gold standard. The idea is that these audits will act like the sorts of inspections we see in other high-risk sectors, such as chemical plants, says Alex Engler, who studies AI governance at the think tank the Brookings Institution.
The trouble is, there aren’t enough independent contractors out there to meet the coming demand for algorithmic audits, and companies are reluctant to give them access to their systems, argue researcher Deborah Raji, who specializes in AI accountability, and her coauthors in a paper from last June.
That’s the gap these competitions aim to fill. The hope in the AI community is that they’ll lead more engineers, researchers, and experts to develop the skills and experience needed to carry out these audits.
Much of the limited scrutiny in the world of AI so far comes either from academics or from tech companies themselves. The aim of competitions like this one is to create a new sector of experts who specialize in auditing AI.
“We are trying to create a third space for people who are interested in this kind of work, who want to get started or who are experts who don’t work at tech companies,” says Rumman Chowdhury, director of Twitter’s team on ethics, transparency, and accountability in machine learning, who leads the Bias Buccaneers. These people could include hackers and data scientists who want to learn a new skill, she says.
The team behind the Bias Buccaneers’ bounty competition hopes it will be the first of many.
Competitions like this not only create incentives for the machine-learning community to do audits but also advance a shared understanding of “how best to audit and what types of audits we should be investing in,” says Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab.
The effort is “fantastic and absolutely much needed,” says Abhishek Gupta, the founder of the Montreal AI Ethics Institute, who was a judge in Stanford’s AI audit challenge.
“The more eyes that you have on a system, the more likely it is that we find places where there are flaws,” Gupta says.