“Lawmakers should provide rule-based descriptions of what it means not to be racist”

Cathy O’Neil is a mathematician and the author of several books on data science, including Weapons of Math Destruction, which looked at the impact of algorithms on society. In an e-mail interview, she explained how her business, ORCAA, audits algorithms.

Editor's note: This is the companion interview to the article "The algorithm police is coming. Will it have teeth?"

 

AlgorithmWatch: To what extent can private-sector regulation (e.g. through labeling) improve algorithmic accountability?

Cathy O’Neil: I don't have enormous hopes for self-regulation, if that's what you mean. The clients I have tend to have really good and thoughtful algorithms. The really bad and destructive algorithms tend to be made by companies that would never hire me to audit them. We'd need real, enforced federal regulation to get those companies to the table.

How should legislators organize the policing of algorithms – if at all?

For each law, they should say what it means for an algorithm to be compliant. This is not a trivial matter, but it is doable. It comes down to an explicit, rule-based description of what it means for, say, a hiring algorithm not to be racist.
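One concrete example of what such a rule-based description could look like is the "four-fifths" (adverse impact) guideline from US employment law, which compares selection rates across groups. The sketch below is purely illustrative and assumes the auditor can see the algorithm's pass/fail decisions alongside group labels; the function names, data and 0.8 threshold are assumptions, not ORCAA's methodology.

```python
# A minimal sketch of one possible rule-based compliance check for a hiring
# algorithm: the "four-fifths" (adverse impact ratio) guideline from US
# employment law. The data, group labels and 0.8 threshold are illustrative
# assumptions, not ORCAA's actual methodology.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / total[group] for group in total}

def passes_four_fifths(decisions, threshold=0.8):
    """Each group's selection rate must be at least `threshold` times the
    rate of the most-selected group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    ratios = {group: rate / best for group, rate in rates.items()}
    return all(ratio >= threshold for ratio in ratios.values()), ratios

# Made-up example: group B is hired at 45% vs. group A's 60%, a ratio of
# 0.75, so this hypothetical algorithm fails the check.
decisions = ([("A", True)] * 60 + [("A", False)] * 40 +
             [("B", True)] * 45 + [("B", False)] * 55)
ok, ratios = passes_four_fifths(decisions)
print(ok, ratios)  # False {'A': 1.0, 'B': 0.75}
```

A real audit would cover more than a single ratio, but the point stands: a legal standard can be translated into an explicit test that an algorithm either passes or fails.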

How do you resolve the conflict of interest at the heart of certification, namely that certification businesses have no incentive to go against their clients' wishes?

This is hard. Part of the problem is that the client pays; part of it is that nobody really understands algorithms anyway, so there's an asymmetry of information.

On the first point, I've been trying to convince large companies that use hiring algorithms to hire me to audit those algorithms, arguing that they're the ones on the hook if something goes wrong. Unfortunately, they simply don't think they really would be on the hook. So we need some legal or regulatory standards to be established to get this business model to work.

On the second, there's the problem that everybody kind of trusts the algorithms and would then have to trust the auditors too. This can be mitigated if the methodology of the audit is publicly available, even if the source code of the algorithms it measures is not. That's further down the road, though, once we have rules in place as to what the laws mean.

Your website mentions that ORCAA provides "public-facing documentation" of audits. Could you point me to the ones that are already published?

ORCAA has only done one of those, for Rentlogic, and there was nothing in the contract stipulating that they had to make it available. The truth is, it's such an early moment in this new field of algorithmic auditing that ORCAA has very little leverage on such matters.

My long-term vision:

  • We decide what it means for, say, hiring algorithms to comply with the Americans with Disabilities Act of 1990, for insurance pricing algorithms to be non-discriminatory by race, for credit algorithms to be non-discriminatory by gender, or for employment advertising to be non-discriminatory with respect to age.
  • We develop auditing tools that test those algorithms to make sure they're complying with those rules (a minimal sketch of what such a tool might look like follows this list).
  • We publish the methodology and post the results of those audits regularly so people can complain about our methodology or what have you.
  • Companies that use these algorithms (most of which are licensed) are told they can only use algorithms that have been audited and found to comply with all relevant laws.
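As a purely hypothetical illustration of the second and third points above, the sketch below wires explicit, named rule-checks into a small harness and serializes the results so that the methodology and outcomes could be published even when the audited source code is not. The class, function and rule names, the data, and the toy check are assumptions for illustration, not an existing ORCAA tool.

```python
# Hypothetical auditing harness: each legal rule becomes a named check, every
# protected attribute is tested, and results are rendered in a publishable
# form. Purely illustrative; not an existing ORCAA tool.
import json
from dataclasses import dataclass, asdict

@dataclass
class CheckResult:
    rule: str         # e.g. "non-discriminatory by age (employment ads)"
    attribute: str    # protected attribute the check was run against
    passed: bool
    details: dict

def run_audit(decisions_by_attribute, checks):
    """decisions_by_attribute: maps an attribute name (e.g. 'gender') to the
    algorithm's decisions broken out by that attribute.
    checks: maps a rule name to a function returning (passed, details)."""
    results = []
    for attribute, decisions in decisions_by_attribute.items():
        for rule_name, check in checks.items():
            passed, details = check(decisions)
            results.append(CheckResult(rule_name, attribute, passed, details))
    return results

def publishable_report(results):
    """Serialize the audit results so the methodology and outcomes can be
    posted publicly, even if the audited algorithm's source code is not."""
    return json.dumps([asdict(r) for r in results], indent=2)

# Usage with a deliberately toy rule: flag any group selected less than half
# as often as the most-selected group. Counts are (selected, total) per group.
def toy_parity_check(decisions):
    rates = {g: selected / total for g, (selected, total) in decisions.items()}
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}
    return all(r >= 0.5 for r in ratios.values()), ratios

data = {"gender": {"men": (50, 100), "women": (20, 100)}}
print(publishable_report(run_audit(data, {"toy parity rule": toy_parity_check})))
```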
