Platform regulation

Not a solution: Meta’s new AI system to contain discriminatory ads

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

Meta’s automated advertising system is extraordinarily efficient at targeting and delivering ads to users most likely to click on them.

However, this drive for efficiency or “optimization” contains an inherent tradeoff: It can exclude certain people from having an equal opportunity to see specific ads, which reinforces existing societal biases and disparities.

For example, when a truck driver job is posted on Facebook, the platform will rarely show the ad to women, because the algorithm determines that men are more likely to click on it. When AlgorithmWatch ran this experiment in Germany, the ad for truck drivers was shown on Facebook to 4,864 men but only to 386 women.

Even when advertisers forgo explicitly selecting a target audience, the algorithm will discriminate anyway. Such algorithmic discrimination can systematically disadvantage those who are already marginalized, which is especially problematic when the advertiser in question is an employer, a landlord, or a credit agency.  

It could also be illegal, given that European law forbids discrimination based on protected characteristics such as race and gender. However, it’s rare for individuals who have been discriminated against to recognize that their rights have been violated or to seek redress — after all, if someone never saw an ad for a job for which they would be eligible, how would they know that they hadn’t seen it?

After years of failing to clean up the problem—and despite mounting evidence of biased ad delivery in the U.S. and in Europe—Meta has largely managed to avoid legal culpability for its role in perpetuating algorithmic discrimination.

That changed somewhat in June 2022, when Meta reached a legal settlement with the U.S. Department of Justice that would force the company to make housing ads more equitable on the platform. The settlement followed a series of lawsuits from the American Civil Liberties Union (ACLU) and other civil rights groups.

As part of the settlement, Meta agreed to remove the Special Ad Audiences tool, which advertisers had used to indirectly profile and target certain groups. Meta was also required to build and deploy a new “Variance Reduction System” (VRS) to, in effect, de-bias the machine learning algorithms within its ad targeting and delivery system, specifically to reduce bias based on sex and race/ethnicity for housing ads.

Instead of optimizing for clicks, the VRS is designed to optimize for equal “accuracy” of ad delivery across eligible target audiences. Once an ad is seen by enough users, the VRS measures the aggregate age, gender, and “estimated race or ethnicity distribution” of those who have already seen the ad and compares that with the broader eligible audience who could have potentially seen the ad, then adjusts ad delivery accordingly.
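To make that mechanism concrete, the sketch below is a deliberately simplified, hypothetical illustration (it is not Meta’s actual VRS code): it compares the demographic shares of an ad’s viewers so far with the shares of the eligible audience, and derives per-group delivery adjustments from the gap.

```python
# Hypothetical sketch of a variance-reduction step -- not Meta's actual VRS code.
# It compares the demographic mix of users who have already seen an ad with the
# mix of the broader eligible audience, then derives per-group delivery boosts.

def shares(counts: dict[str, int]) -> dict[str, float]:
    """Convert raw counts per group into fractions of the total."""
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def delivery_boosts(seen: dict[str, int], eligible: dict[str, int]) -> dict[str, float]:
    """Return a multiplier per group: above 1 if the group is under-served relative
    to its share of the eligible audience, below 1 if it is over-served."""
    seen_shares = shares(seen)
    eligible_shares = shares(eligible)
    return {
        group: eligible_shares[group] / max(seen_shares.get(group, 0.0), 1e-9)
        for group in eligible_shares
    }

# Example: women make up 40% of the eligible audience but only 10% of viewers so far,
# so their delivery gets boosted while men's is throttled.
seen = {"men": 900, "women": 100}
eligible = {"men": 60_000, "women": 40_000}
print(delivery_boosts(seen, eligible))  # {'men': ~0.67, 'women': ~4.0}
```

Meta’s real system is far more elaborate, but the core comparison between the measured audience and the eligible audience is the same.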

One year after the settlement, a compliance report prepared by the management consultancy firm Guidehouse claims that Meta’s VRS is working as intended. But gaps in the report make these claims hard to verify. And discriminatory ads continue to run rampant on Facebook in domains other than housing, as well as in other jurisdictions such as Europe, where the VRS is unlikely to be implemented.

Meta’s VRS may represent a narrow win for U.S.-based civil rights groups. But it also shows the limits of Meta’s patchwork approach to fixing the inherent risks of its targeted advertising system—risks that will remain unchecked unless regulators step in.

What’s in an audit?

Under the terms of its settlement with the U.S. Justice Department, Meta contracted Guidehouse (a spin-off of the Big Four auditing firm PwC) to act as an independent, third-party reviewer of its VRS compliance.

In the first compliance report, published in June, Guidehouse verifies that Meta complied with the relevant VRS metrics for both sex and estimated race/ethnicity, both for housing ads with at least 300 impressions and for those with more than 1,000.

But this report doesn’t hold up under outside scrutiny, says Daniel Kahn Gillmor, a privacy advocate and technologist for the ACLU. He calls it “an overly technical report that still doesn’t answer all the important technical questions.”

Among Gillmor’s observations is that Guidehouse apparently did not gain access to Meta’s internal data in order to conduct a thorough audit. The auditor instead reviewed spreadsheets provided by Meta. And these spreadsheets omit crucial information.

For example, are there ads that get a million impressions? We don’t know, says Gillmor, because the data Meta provided doesn’t include a breakdown of the number of housing ads that exceed 1,000 impressions. Nor does Meta acknowledge how many users’ race and gender are “unknown” to the system.

Granted, gender and especially racial categories are themselves suspect, given that they are not fixed characteristics that can be neatly categorized, the way height or age can. And Meta itself has acknowledged that the statistical model that undergirds the VRS can be less accurate for people of color than it is for white people.

Putting these legitimate concerns aside, says Gillmor, Meta doesn’t require a perfect model of gender or race to run its de-biasing algorithm. But auditors do need better data to evaluate whether the system is effective or not. And the auditor should at least be able to report on the relative scale of the dark matter that Meta didn’t let them measure.

“There are a number of pieces that are missing here, and I don’t even know if Guidehouse got them from Meta. And as a good auditor, I would expect a group like Guidehouse to say, ‘if you don’t give us the data, we can’t do the audit.’”  

“They should at least document how big their blind spot is,” Gillmor says. And the auditor’s blind spots are glaring enough to call into question whether Meta’s reported variance reduction rate was accurately measured. 

A band-aid that doesn’t scale

Muhammad Ali, a postdoctoral researcher in responsible AI at Boston’s Northeastern University, co-authored an influential 2019 paper on how Meta’s ad delivery can lead to biased outcomes. He acknowledges the extraordinary effort behind the VRS but says that it fails to address the problems inherent to the company’s ad optimization system.

“From an engineering standpoint, it’s very commendable what they built,” he says, referencing a dense paper in which 12 Meta engineers detail the methods behind the VRS. “But it’s such a weird, complex system to fix one tiny, particular issue [housing ads] because it was legally mandated, but which is really just an artifact of how the system was designed.”

Meta has announced plans to also apply the VRS to credit and employment ads within the United States. However, Ali says, we shouldn’t expect Meta’s VRS to extend to jurisdictions outside the U.S. anytime soon without an immense engineering effort. That’s because the VRS relies on a model that uses U.S. census data (addresses and surnames) to calculate the “estimated race or ethnicity” of ad audiences.
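This census-based approach is akin to Bayesian Improved Surname Geocoding (BISG), which combines the racial/ethnic distribution associated with a surname in census tables with the distribution of the person’s neighborhood. The toy sketch below shows the basic combination step; its numbers are invented for illustration, and it is not Meta’s model.

```python
# Toy illustration of a surname-plus-geography estimate, in the spirit of
# Bayesian Improved Surname Geocoding (BISG). All numbers are invented for
# illustration -- they are not real census figures, and this is not Meta's model.

# P(group | surname), e.g. derived from census surname tables
SURNAME_PRIORS = {
    "garcia": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "asian": 0.02},
    "smith":  {"white": 0.73, "black": 0.22, "hispanic": 0.03, "asian": 0.02},
}

# P(group | neighborhood), e.g. derived from census tract statistics for an address
TRACT_PRIORS = {
    "tract_a": {"white": 0.60, "black": 0.20, "hispanic": 0.15, "asian": 0.05},
    "tract_b": {"white": 0.20, "black": 0.10, "hispanic": 0.65, "asian": 0.05},
}

def estimate(surname: str, tract: str) -> dict[str, float]:
    """Combine the surname and neighborhood distributions into one normalized
    estimate (a naive Bayes-style product, simplified from real BISG)."""
    s = SURNAME_PRIORS[surname.lower()]
    t = TRACT_PRIORS[tract]
    combined = {group: s[group] * t[group] for group in s}
    total = sum(combined.values())
    return {group: p / total for group, p in combined.items()}

print(estimate("Garcia", "tract_b"))  # weighted heavily toward "hispanic"
```

Building an equivalent model elsewhere would require comparable population data keyed to local names and addresses, which, as Ali notes below, often doesn’t exist.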

“In a context of a place like India where you have things like caste, you don’t have datasets to estimate the caste of someone from their surname. So, you’d have to collect that data and engineer a completely different system,” he says.

“With more engineering effort, you could solve this narrow vertical in other countries as well. But it will always be a narrow vertical.”

Ali also points out that, beyond job, housing, and credit ads, we should worry about a range of other problematic ads. For example, “if you have data about someone who is interested in diet content, they might end up seeing a lot of weight loss ads. If you have data about someone who’s interested in gambling and betting, they might see a lot of online slot machines and betting ads.”

When it comes to targeted ads, there is a need to protect people with vulnerabilities like addiction or body image issues. But this will remain a challenge as long as the system is designed for profit and not for the benefit of the users.

What do EU regulations say?

Under the EU’s new Digital Services Act (DSA), which now applies to the largest platforms and search engines operating in Europe, platforms are no longer permitted to target ads using “sensitive” categories of data, such as race, gender, religion, and sexual orientation.

Platforms are furthermore required to identify and mitigate “systemic risks” stemming from their services, which include any negative effects on the exercise of fundamental rights, such as the prohibition of discrimination enshrined in Article 21 of the EU Charter.

These legal frameworks would seem highly relevant to reining in Meta’s discriminatory ad delivery system. But for now, the DSA’s top enforcer seems more concerned with curbing illegal wartime content than addressing longstanding issues like algorithmic discrimination or potential health risks stemming from algorithmic optimization in ad targeting.

Nevertheless, it’s clear that Meta’s piecemeal approach to mitigating automated bias in ad delivery won’t fix the inherent risks in its targeted advertising system. The company may have to make fundamental changes to its ad-based business model to bring it in line with the law, and not just because of the DSA: the European Data Protection Board recently issued a binding decision banning Meta’s targeted advertising practices under the GDPR. Meta, for its part, hopes to fulfill the GDPR’s consent requirement for data processing by offering an ad-free subscription model, effectively making users pay for their privacy.

The case of Meta’s VRS is also instructive for European regulators and watchdogs as they prepare to evaluate a litany of Big Tech compliance reports under the DSA. The VRS’s auditing requirement, which is subject to court oversight in the U.S., is reminiscent of the more complex auditing requirements laid out in the DSA.

Under the DSA, the largest tech platforms and search engines operating in Europe must subject themselves to annual audits by independent auditors. These auditors—who, like Guidehouse, are contracted by the tech companies—will have privileged access to their clients’ data and personnel to check whether they are complying with the DSA’s obligation to identify and mitigate systemic risks.

These “systemic risk” audits are already underway. But it remains ambiguous how platforms intend to define and measure so-called systemic risks, much less mitigate them. And until regulators establish guidelines for how systemic risk assessments should be done, platforms will set their own benchmarks which auditors will use to evaluate their performance.

Such audits may thus result in audit-washing: self-adopted methodologies and standards, combined with auditors’ incentive to retain lucrative contracts, could undermine the effectiveness and reliability of the auditing process.

Big Tech compliance reports not a substitute for external scrutiny

Regardless of how these risk assessments and audits play out, it’s clear that the public will soon be awash in compliance reports from Big Tech companies. The first round of DSA transparency reports from some of the largest platforms was published in late October; redacted audit reports of platforms’ systemic risk assessments are expected to be published late next year.

This kind of transparency is a good thing, in that it gives regulators, watchdogs, and civil society organizations something more concrete to scrutinize. But it will also open the door to many more questions—questions that won’t be resolved simply through more “transparency” given how limited platforms’ disclosures are.  

That’s why official audits, whether under U.S. or European law, will not erase the need for truly independent, adversarial audits to test platforms’ algorithmic systems. Adversarial audits are carried out by individuals and organizations that aren’t on Meta’s payroll (nor the payroll of any other Big Tech platform); thus, adversarial auditors do not have a financial conflict of interest.

To enable adversarial audits, Meta would need to allow public interest researchers to set up accounts that impersonate regular users, for example, or to collect data donations from real users, so that researchers can independently test and observe whether different types of users are delivered different ads based on their profiles. However, projects that employ such adversarial research methods have been shut down by Meta in the past and subjected to intimidation.

The DSA might offer protections and opportunities for such researchers via its data access framework, but this has yet to come to fruition. The European Commission still needs to iron out the details in a delegated regulation expected early next year; even that framework may not prevent platforms from suing researchers whose findings they don’t like.

In other words, there are serious obstacles to empowering the kind of public interest research that is critical to holding Big Tech accountable for risks that may not be fully captured in transparency reports.

Until we have legally enforced protections for public interest researchers and secure pathways for them to access platform data, we’ll be left to deal with platforms’ self-assessments and audit reports. A flood of these reports is on the way. But they are no substitute for the truly independent research needed to hold platforms accountable for the risks their services pose to society.

Meta declined to answer our questions beyond pointing to the company’s existing public disclosures on the VRS. Guidehouse responded with a “no comment,” saying it was “committed to protecting the confidentiality of our client engagement and related information.”
