What is an “algorithm”? It depends whom you ask


Likewise, New York City is considering Int 1894, a law that would introduce mandatory audits of “automated employment decision tools,” defined as “any system whose function is governed by statistical theory, or systems whose parameters are defined by such systems.” Notably, both bills mandate audits but provide only high-level guidance on what constitutes an audit.

As policymakers in government and industry create standards for algorithmic audits, disagreements over what counts as an algorithm are likely. Rather than trying to agree on a common definition of “algorithm” or on a particular universal audit technique, we suggest evaluating automated systems primarily on the basis of their impact. By focusing on results rather than inputs, we avoid unnecessary debates about technical complexity. What matters is the potential for harm, whether it’s an algebraic formula or a deep neural network.

Impact is a critical evaluative factor in other fields as well. It is built into the classic DREAD threat-modeling framework, which was first popularized by Microsoft in the early 2000s and is still used by some companies. The “A” in DREAD asks threat assessors to quantify “affected users” by estimating how many people would be impacted by an identified vulnerability. Impact assessments are also common in human rights and sustainability analyses, and we’ve seen some early AI impact assessment developers create similar rubrics. For example, Canada’s Algorithmic Impact Assessment produces a score based on qualitative questions such as “Are clients in this line of business particularly vulnerable? (yes or no).”
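To make the “affected users” idea concrete, here is a minimal Python sketch of how a DREAD-style rating might be tallied. The five 0-to-10 categories and the averaged score follow common descriptions of the framework; the class and field names are our own illustration, not Microsoft’s tooling or any mandated standard.

```python
from dataclasses import dataclass

@dataclass
class DreadRating:
    """One 0-10 rating per DREAD category for a single identified vulnerability."""
    damage: int           # D: how bad would an attack be?
    reproducibility: int  # R: how reliably can the attack be repeated?
    exploitability: int   # E: how little work does the attack require?
    affected_users: int   # A: how many people would be impacted?
    discoverability: int  # D: how easy is the vulnerability to find?

    def score(self) -> float:
        # A common convention scores a threat as the mean of the five
        # ratings; higher scores indicate threats to prioritize first.
        parts = (self.damage, self.reproducibility, self.exploitability,
                 self.affected_users, self.discoverability)
        if not all(0 <= p <= 10 for p in parts):
            raise ValueError("each DREAD rating must be in the range 0-10")
        return sum(parts) / len(parts)

# Example: a flaw that is easy to discover and reaches many users scores
# high even when the damage per user is only moderate.
rating = DreadRating(damage=5, reproducibility=8, exploitability=6,
                     affected_users=9, discoverability=9)
print(f"DREAD score: {rating.score():.1f}")  # DREAD score: 7.4
```

Note how the “affected users” dimension pulls the overall score up on its own: an impact-centered rubric surfaces wide-reaching harms even when each individual harm looks small, which is exactly the property that makes impact useful as common ground for algorithm audits.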

Introducing a loosely defined term such as “impact” into any evaluation is admittedly difficult. The DREAD framework was later supplemented or replaced by STRIDE, in part because of challenges in reconciling different beliefs about what threat modeling entails. Microsoft stopped using DREAD in 2008.

In the field of AI, conferences and journals have already introduced impact statements, with varying degrees of success and controversy. The approach is far from foolproof: impact statements that are too formulaic can easily be gamed, while a definition that is too vague can lead to arbitrary or interminably long evaluations.

Yet it is an important step forward. The term “algorithm,” however defined, should not be a shield that absolves the humans who designed and deployed a system of accountability for the consequences of its use. This is why the public is increasingly demanding algorithmic accountability, and the concept of impact offers useful common ground for the different groups working to meet that demand.

Kristian Lum is an Assistant Research Professor in the Department of Computer and Information Science at the University of Pennsylvania.

Rumman Chowdhury is the Director of Twitter’s Machine Learning Ethics, Transparency and Accountability (META) team. She was previously CEO and Founder of Parity, an algorithmic audit platform, and Global Head of Responsible AI at Accenture.


