From ensuring that autonomous cars are programmed not to run over pedestrians to preventing recommendation algorithms from radicalizing users, algorithmic accountability is high on the agenda of governments and lawmakers across the world. But legislation is nothing without enforcement. It is clear that street-level police officers will not be able to tell a rogue algorithm from a legal one. Who will is much less clear.
Self-regulation is one possibility. The European Commission is considering introducing a label for “trustworthy Artificial Intelligence.” Firms would voluntarily commit to respecting a set of rules and receive a seal of approval from a European authority. Denmark’s government announced a similar program last October, which has yet to be launched.
Cathy O’Neil, a mathematician who wrote the acclaimed Weapons of Math Destruction, started her own certification bureau, called ORCAA (O'Neil Risk Consulting & Algorithmic Auditing). Her organization delivers “seals of approval” if an algorithm is deemed robust enough.
As long as they are voluntary and paid for by the companies themselves, labels and seals are unlikely to deter bad actors. Ms O’Neil is aware of this. “The clients I have tend to have really good and thoughtful algorithms,” she wrote in an email. The bad ones “would never hire me to audit them.” (Read her complete interview.)
Several countries are keen to improve their algorithmic policing skills. In France, a high-ranking public official who asked not to be named told AlgorithmWatch that the government was preparing a task force of data scientists. They would support the country’s regulators in their attempts to investigate algorithms. The competition authority could summon data from e-commerce platforms and ask the task force to look into it for evidence of price-fixing, for instance.
At the European level, some argue that the upcoming Digital Services Act, a European directive that will be negotiated this year, should create a similar body. However, some regulators are wary of pooling resources and would prefer to develop their own teams of data scientists, claiming that much of the expertise required for algorithmic auditing is context-specific and cannot be reused across sectors (AlgorithmWatch reported in December).
No matter how these experts are organized, their work would only be trustworthy if they had access to the platforms' actual data and could scrutinize the algorithms running on their production servers. They would need physical access to the data centers, much like hygiene inspectors showing up unannounced in a restaurant’s kitchen.
ML is the new CDO
Varoon Bashyakarla, a data scientist on a fellowship from the American Council On Germany, recently conducted research on the cybersecurity insurance market in New York. Over the past few years, he anecdotally observed a migration of employees from big financial institutions on Wall Street to big tech companies. One of the consequences of this shift, he theorizes, is that “the attitudes towards risk in the tech sector are starting to resemble the attitudes towards risk in the financial sector.”
Tech behemoths “privatize benefits from our data and externalize the associated costs and risks onto the public,” Mr Bashyakarla added, in a manner that reminds him of big banks' behavior in the run-up to the 2008 financial crisis. Some tech companies are starting to behave as if they were “too big to fail,” assuming that regulators will not go after them.
Several key institutions already rely on tech giants, from France’s “health data hub,” entrusted to Microsoft’s Azure, to Germany’s federal police, which relies on Amazon’s servers.
Societies around the world are still coping with the consequences of the failure to police the financial sector in the 2000s. Lawmakers and police forces should not let it happen again.