#auditing (8 results)

Why we need to audit algorithms and AI from end to end

The full picture of algorithmic risks and harms is a complicated one. So how do we approach the task of auditing algorithmic systems? Various attempts try to simplify the picture into overarching, standardized frameworks, or to focus on particular areas, such as understanding and explaining the “black box” of models. While this work has its benefits, we need to look at systems from end to end to fully capture the reality of algorithmic harms.

Researching Systemic Risks under the Digital Services Act

AlgorithmWatch has been investigating the concept of "systemic risks" under the Digital Services Act (DSA), the new EU regulation that aims to minimize risks and increase accountability for online platforms and search engines.

Platform regulation

Not a solution: Meta’s new AI system to contain discriminatory ads

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more of a band-aid than an AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify whether the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

Interview

New audits for the greatest benefits possible

Oliver Marsh is the new head of AlgorithmWatch’s project "Auditing Algorithms for Systemic Risks." He told us about his background and the goals he will be pursuing.

Risky business: How do we get a grip on social media algorithms?

Public scrutiny is essential to understand the risks that personalized recommender systems pose to society. The DSA’s new transparency regime is a promising step forward, but we still need external, adversarial audits to hold platforms accountable.

New project: Auditing Algorithms for Systemic Risks

Assessing algorithmic systems is an indispensable step towards the protection of democracy, human rights, and the rule of law. At the moment, it is largely unclear how this can and should be done. With the support of the Alfred Landecker Foundation, we are developing ideas, procedures, and best practices to effectively assess algorithms for systemic risks, test them in practice, and advocate for their adoption.

How researchers are upping their game to audit recommender systems

Recommender systems, such as a social network’s news feed or a streaming service’s recommendations, are notoriously difficult to audit. Despite high hurdles, practitioners from journalism and academia are pushing forward.

Towards accountability in the use of Artificial Intelligence for Public Administrations

Michele Loi and Matthias Spielkamp analyze the regulatory content of 16 guideline documents on the use of AI in the public sector, mapping their requirements against our philosophical account of accountability. They conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors’ entitlement and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.