“Explainable AI” doesn’t work for online services – now there’s proof

New regulation, such as the GDPR, encourages the adoption of “explainable artificial intelligence.” Two researchers claim to have proven that online services cannot provide trustworthy explanations.

Story

12 November 2019

Nicolas Kayser-Bril
Reporter

Most algorithms labelled “artificial intelligence” automatically identify relationships in large data sets, unbeknownst to the humans who program them. No one at Facebook decides which advertisements are displayed on your newsfeed. Instead, an algorithm learned that, based on some 900 variables such as age and location, people like you were more likely to click on these specific advertisements.

Because the inner logic of such algorithms is not known, they are often called black boxes. Making them less opaque has long been a concern for computer scientists, who began work on “explainable AI” in the 1970s.

However, the tools of explainable AI require unfettered access to the algorithm under scrutiny. The usual method is to bombard it with requests and observe how the output varies in response to slight changes in the input.
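To make the probing idea concrete, here is a minimal sketch in Python. The `remote_model` function is a hypothetical stand-in for the service being examined (a real probe would send API requests instead), and the sensitivity estimate is a simplified illustration of the general approach, not any particular explainability tool.

```python
import random

def remote_model(features: dict) -> float:
    # Placeholder for the black box: in reality this would be an API call
    # to the remote service under scrutiny.
    return 0.8 if features["age"] < 30 and features["clicks_last_week"] > 5 else 0.2

def local_sensitivity(features: dict, num_samples: int = 200) -> dict:
    """Estimate each numeric feature's influence by nudging it slightly
    and recording the average change in the model's output."""
    baseline = remote_model(features)
    influence = {}
    for name, value in features.items():
        if not isinstance(value, (int, float)):
            continue  # skip non-numeric features in this toy example
        deltas = []
        for _ in range(num_samples):
            perturbed = dict(features)
            perturbed[name] = value + random.uniform(-1.0, 1.0)  # slight change in input
            deltas.append(abs(remote_model(perturbed) - baseline))
        influence[name] = sum(deltas) / num_samples
    return influence

# Features near a decision boundary show a measurable influence,
# while irrelevant ones stay at zero.
print(local_sensitivity({"age": 29.5, "clicks_last_week": 6, "city": "Berlin"}))
```

This only works because the probe can query the model as often as it likes, which is exactly the access a remote service does not have to grant.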

The right to explanation

Recent legislation such as the GDPR introduced a “right to explanation.” Organizations must be able to explain any decision based on an algorithm to the people it affects. Facebook offers a feature called “Why am I seeing this ad?”, and Google does as well. Vendors even offer off-the-shelf products, such as IBM’s “AI Explainability 360” toolkit, which organizations can apply to their own algorithms.

There is just one problem with these solutions: They cannot be trusted.

Mathematical proof

Erwan Le Merrer and Gilles Trédan are researchers at the French National Center for Scientific Research. Their latest paper, which was submitted to the “Fairness, Accountability and Transparency” conference, “proves the impossibility of remote explainability for single explanations.”

“It’s like a bouncer in front of a night club,” they said in an interview with AlgorithmWatch. A bouncer who has been told not to let any Person of Color in can give any false reason for denying someone access. “He can tell you that you wore the wrong shoes or the wrong shirt, and not reveal the actual basis for the decision.”

Like a bouncer, an algorithm on a remote system could hide the true reason behind a decision and no one would know. Mr. Le Merrer and Mr. Trédan claim that whatever the decision being explained, its true basis can always be concealed behind a plausible alternative explanation.

One way to uncover deceitful explanations would be to find what the researchers call “incoherent pairs,” as the sketch below illustrates. If a bouncer tells one person that she was turned away because of her white shoes, and someone wearing white shoes subsequently gets in, we know the bouncer lied. Unfortunately, finding incoherent pairs when the algorithm has access to hundreds of variables is close to impossible.
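Both ideas can be put in code. The following is a toy sketch, not taken from the researchers’ paper: a hypothetical `bouncer` function decides on a hidden attribute but always reports a harmless-sounding reason, and a helper searches the observed decisions for incoherent pairs.

```python
from itertools import combinations

def bouncer(person: dict) -> tuple:
    """Returns (admitted, stated_reason). The stated reason never mentions
    the attribute actually used for the decision."""
    if person["skin_color"] != "white":                    # true, discriminatory basis
        return False, f"wrong shoes: {person['shoes']}"    # fabricated explanation
    return True, "dress code satisfied"

def find_incoherent_pairs(observations: list) -> list:
    """An incoherent pair: one person rejected 'because of' a feature value
    that an admitted person shares."""
    pairs = []
    for a, b in combinations(observations, 2):
        rejected, admitted = None, None
        if not a["admitted"] and b["admitted"]:
            rejected, admitted = a, b
        elif not b["admitted"] and a["admitted"]:
            rejected, admitted = b, a
        if (rejected and "shoes" in rejected["reason"]
                and rejected["person"]["shoes"] == admitted["person"]["shoes"]):
            pairs.append((rejected, admitted))
    return pairs

people = [
    {"skin_color": "black", "shoes": "white"},
    {"skin_color": "white", "shoes": "white"},
]
observations = []
for p in people:
    ok, reason = bouncer(p)
    observations.append({"person": p, "admitted": ok, "reason": reason})

# Non-empty result: the stated explanation contradicts an observed admission.
print(find_incoherent_pairs(observations))
```

The catch is that the check above only compares a single feature across two observations; with hundreds of variables and explanations phrased over any of them, the number of observations needed to stumble on such a contradiction becomes prohibitive.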

Tighter controls

Checks on explainable AI should be carried out at least partly on-premises, Mr. Le Merrer and Mr. Trédan said. An explanatory algorithm could be built so that it changes its answers depending on who makes the request. There is precedent in other industries: in what became known as the Dieselgate scandal, Volkswagen and other car manufacturers programmed the software of some cars to behave differently during pollution tests than it did on the road.

The issue is not unlike hygiene inspection, they added. It would be impossible to control restaurants by looking only at the dishes they serve. This is why food inspectors go into the kitchen, measure fridge temperatures and look for rodents and other signs of unhygienic practices. Similar systems must be developed if algorithms are to be seriously controlled.

Aleksandra Sowa, an expert in IT compliance and an adviser to the Committee on Fundamental Values of the German social-democratic party, agrees that an independent authority is needed. “Auditors of algorithmic systems should not only have legal and technical expertise,” she wrote in an email interview. “They should also have the ability to control and interpret the results of their review.” The authority should have enough resources to investigate and sanction wrongdoings, she added.

The algorithmic police might soon come into being. On November 12, the German government announced a control authority for AI, to start in 2020, but its precise mission is still unclear.

