Twitter’s algorithmic bias bug bounty could be the way forward, if regulators step in

Twitter opened its image cropping algorithm and gave prizes to people who could find biases in it. While interesting in itself, the program mostly reveals the impotence of regulators.

Nicolas Kayser-Bril
Reporter

On 30 July, Twitter launched the first algorithmic bias bug bounty program. The company opened the code of its image cropping algorithm and asked people – anyone could participate – to look for new biases in it. The algorithm had been accused of racist bias in September 2020, after a user showed that it systematically cropped Barack Obama out of images featuring both the former president and a white US politician.

Bug bounty programs are common in computer security. Most large software firms run them, offering monetary rewards for newly found bugs – the more severe the security issue, the higher the reward. One of the conditions for payment, of course, is that the bug not be made public until the software vendor has had time to fix it.

Twitter let the bug bounty run for a week. By the end, it had received over 30 contributions from individuals and teams, including one from a high school student.

Participants showed that the algorithm cropped out people with white hair and text in non-Latin scripts, and that it automatically focused on the last image of a comic strip, thereby ruining the suspense for the user. The winner of the bug bounty program built a tool that optimizes faces to increase the likelihood that they will not be cropped out (it turns out that younger faces with a light skin tone and stereotypically feminine traits perform better).
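The winning approach can be pictured as a simple black-box search: repeatedly perturb an image and keep the variants that the saliency model scores higher. Below is a minimal sketch of that idea in Python; saliency_score is a hypothetical stand-in for the released model, and the random-noise perturbation is a toy substitute for the winner's actual face-editing pipeline.

```python
# Minimal sketch of a black-box search against a saliency model: keep small
# perturbations that raise the score the cropper assigns to the image.
# saliency_score is a hypothetical stand-in, not Twitter's released code.
import numpy as np

rng = np.random.default_rng(0)

def saliency_score(image: np.ndarray) -> float:
    """Hypothetical stand-in: replace with a call to the open-sourced model."""
    # Toy proxy: brighter images score higher, loosely echoing the finding
    # that lighter skin tones were favoured by the real model.
    return float(image.mean())

def hill_climb(image: np.ndarray, steps: int = 200, strength: float = 2.0) -> np.ndarray:
    """Greedily keep random perturbations that raise the saliency score."""
    best, best_score = image.copy(), saliency_score(image)
    for _ in range(steps):
        candidate = np.clip(best + rng.normal(0, strength, best.shape), 0, 255)
        score = saliency_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    face = rng.uniform(0, 255, size=(64, 64, 3))   # placeholder for a real portrait
    tuned = hill_climb(face)
    print(f"score before: {saliency_score(face):.2f}, after: {saliency_score(tuned):.2f}")
```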

Twitter’s program is an unprecedented experiment in openness. The cropping algorithm is copiously documented and very easy to run (even though it seems biased against AlgorithmWatch’s logo). It is likely that more researchers will use it in the future to report new findings and develop new methodologies.

The “saliency map” of Twitter’s image cropping algorithm (left), and the actual crop (right).
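The link between the two is straightforward: the cropper takes the most salient point on the map and cuts a window of the target aspect ratio around it. The sketch below illustrates that step, assuming a saliency map is already available; the map here is synthetic, and the function crop_from_saliency is our own naming, not part of Twitter's released code.

```python
# Sketch of deriving a crop from a saliency map: centre a window of the target
# aspect ratio on the map's most salient point, then clamp it to the image
# bounds. The map below is synthetic, not produced by the released model.
import numpy as np

def crop_from_saliency(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple[int, int, int, int]:
    """Return (top, left, bottom, right) of a crop centred on the saliency peak."""
    h, w = saliency.shape
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(peak_y - crop_h // 2, 0, h - crop_h))
    left = int(np.clip(peak_x - crop_w // 2, 0, w - crop_w))
    return top, left, top + crop_h, left + crop_w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    saliency_map = rng.random((400, 600))              # stand-in for a model-produced map
    print(crop_from_saliency(saliency_map, 200, 600))  # a wide, banner-style crop
```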

Although Twitter should be lauded for this experiment, the program falls far short of addressing the harms caused by algorithmic bias.

First, Twitter only opened a relatively unimportant algorithm, to which users already had unfettered, if impractical, access (anyone could create a dummy Twitter account, automate the posting of pictures and check how they were cropped). Opening the timeline algorithm, or the one matching users to ads, would be something else entirely. Perhaps Twitter has plans to do so in the future. (Twitter refused to answer our questions.)

More prosaically, the bug bounty offered only 7,000 dollars in prizes, shared between five winners. That translates to 1,400 dollars per winner, and to nothing for the roughly 25 participants who might have identified a new bias but were not selected by the jury. “It would have taken us a year” to identify all these biases, said Kendra Clarke of Twitter’s ethics team during the awards ceremony. Considering that 7,000 dollars is about two weeks of a software engineer’s wages, it is easy to see who benefited most from the program.

In contrast, the average payout for a security-related bug is over 10,000 dollars, an order of magnitude higher. Unless the financial rewards increase to the point that bounty hunters can live off their findings, it is highly unlikely that algorithmic bias bug bounties will become as successful as their model in security.

Software companies pay larger sums for security bug bounties because they have an incentive to do so. Security can be a selling point for customers, and firms have regulatory obligations to disclose data breaches, which in turn encourages them to prevent breaches in the first place.

No such incentives exist for algorithmic bias. On the one hand, the people affected by algorithmic bias are not paying customers, as Deborah Raji, a research fellow in algorithmic harms at the Mozilla Foundation, told ZDNet earlier this year.

On the other hand, although discrimination is prohibited by law, the agencies in charge of enforcing this legislation are underfunded and wholly unable to assess algorithmic bias. Things are changing, as some Member States of the European Union build up their algorithmic auditing capacity. In France, PEReN, an “expertise group for digital regulation”, brings together about twenty people, including software engineers, to support regulatory bodies across the country.

Regulators are building capacity much more slowly than new, and potentially biased, algorithms are being deployed. Bug bounty programs could help identify and fix some of these biases. But regulators must increase the pressure on software vendors and operators until they have an incentive to provide bounty hunters with decent rewards and plenty of algorithms to investigate.
