
Why we need to audit algorithms and AI from end to end

The full picture of algorithmic risks and harms is a complicated one. So how do we approach the task of auditing algorithmic systems? There are various attempts to simplify the picture into overarching, standardized frameworks, or to focus on particular areas, such as understanding and explaining the "black box" of models. While this work has benefits, we need to look at systems from end to end to fully capture the reality of algorithmic harms.

Researching Systemic Risks under the Digital Services Act

AlgorithmWatch has been investigating the concept of "systemic risks" under the Digital Services Act (DSA), the new EU regulation which aims to minimize risks and increase accountability for online platforms and search engines.

Interview

New audits for the greatest benefits possible

Oliver Marsh is the new head of AlgorithmWatch’s project "Auditing Algorithms for Systemic Risks." He told us about his background and the goals that he will be pursuing.

New project: Auditing Algorithms for Systemic Risks

Assessing algorithmic systems is an indispensable step towards the protection of democracy, human rights, and the rule of law. At the moment, it is largely unclear how this can and should be done. With the support of the Alfred Landecker Foundation, we are developing ideas, procedures, and best practices to effectively assess algorithms for systemic risks, test them in practice, and advocate for their adoption.

Visa-free travelers to the EU will undergo “risk” checks from 2023. Who counts as risky remains unclear

Two EU agencies, Frontex and eu-LISA, are developing ETIAS, a new system that automatically assesses the “risk” posed by some travelers. The sorting algorithm will be trained in part with past decisions by border guards.