Algorithms are increasingly being used in public services to determine how resources are distributed, and the decisions these algorithms make often disadvantage the most marginalised and vulnerable in society.
Is it possible that public services algorithms are not simply objective and neutral tools, but rather processes that contain political and economic assumptions about what types of people are worthy of ‘care’ and which are not?
Is it further possible that such algorithms are ideological tools created and administered by elites, designed and used to exclude ever-increasing numbers of people from public service provision, thus forming part of a broader neoliberal agenda?
These are just some of the questions that Virginia Eubanks asks in her latest book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.
In this book she examines several cases in which algorithms are used in biased ways in public services; I consider three of them below, along with some of their implications.
The Allegheny Family Screening Tool
The Allegheny Family Screening Tool (AFST) draws on a large administrative dataset kept since 1999, and is used to predict which kinds of children are most likely to be neglected or abused in the future. The tool is essentially a statistical regression that identifies the background factors of previously abused children, which are then used to help case workers decide which reports they should screen for further investigation.
However, the database only has data on children and parents who have received public assistance – in other words, the poor. It doesn’t include information about the harms that might be happening to middle-class kids.
The problem with this is that poor kids who aren’t being abused may be wrongly flagged as at risk, while middle-class kids who are being abused may simply slip under the radar.
This seems to be a case of conflating poverty with poor parenting – the poor are subjected to extra scrutiny and surveillance, which may not be the most effective way of tackling abuse across all social class groups.
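The mechanism behind this bias can be sketched in code. The snippet below is purely illustrative – it is not the actual AFST model, and every feature name and weight is invented – but it shows how a regression-style risk score trained only on public-assistance records ends up treating proxies for poverty as predictors of harm, while families absent from the database score near zero by default.

```python
import math

# Hypothetical feature weights (invented for illustration). Because the
# underlying database only covers families who received public assistance,
# poverty proxies end up carrying predictive weight alongside genuine
# harm indicators.
WEIGHTS = {
    "prior_assistance_referrals": 0.4,
    "used_food_stamps": 0.3,        # a poverty proxy, not a harm indicator
    "parent_age_under_21": 0.2,
    "prior_substantiated_report": 0.9,
}
INTERCEPT = -1.0  # arbitrary, for illustration only

def risk_score(family: dict) -> float:
    """Weighted sum of recorded background factors, squashed to 0..1
    with a logistic function, as in a logistic regression."""
    z = sum(WEIGHTS[k] * family.get(k, 0) for k in WEIGHTS) + INTERCEPT
    return 1 / (1 + math.exp(-z))

# A poor family with no history of harm still scores relatively high,
# because assistance records and food-stamp use are in the database...
poor_no_harm = {"prior_assistance_referrals": 2, "used_food_stamps": 1}

# ...while a middle-class family is invisible: none of its features are
# recorded at all, so its score sits near the model's floor regardless
# of what is actually happening in the home.
middle_class = {}

print(round(risk_score(poor_no_harm), 2))   # higher score
print(round(risk_score(middle_class), 2))   # near the floor
```

The asymmetry is entirely an artefact of what the database records, not of who is actually harming children – which is Eubanks’s point about the AFST.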
Interestingly, Eubanks notes that bias against the poor has always existed; such databases simply entrench it further.
The Electronic registry of the unhoused in Los Angeles
There are around 58,000 unhoused people in Los Angeles, 75% of them completely unsheltered, and the goal of the registry is to determine who the most vulnerable unhoused people are, so they can be housed first.
The problem with the registry is that it asks people a barrage of morally loaded and potentially incriminating questions – for example:
• ‘Are you having unprotected sex?
• Are you selling drugs for money?
• Is there an open warrant for your arrest?
• Have you thought about harming yourself or others?’
The responses to the survey are shared with over a hundred different agencies across Los Angeles County, and some of the information is available to the LAPD, which can access it simply by making a verbal request.
Some of the people who have taken the survey feel as if they are being asked to incriminate themselves in exchange for a slightly better chance of getting housing.
The Automation of benefit claims in Indiana
In Indiana in 2006, the governor signed a $1.4 billion contract with a collection of high-tech companies (including IBM) to automate all of the state’s eligibility processes for its welfare programs.
They did this by replacing local case workers with online forms and regional call centres, which meant that anyone claiming benefits no longer had a caseworker who knew their case.
If a claimant made a mistake filling in one of the online forms, or if the state or IBM made a mistake – and a LOT of mistakes were made – the claimant was declared ineligible and had their benefits cut, then faced a monumental struggle and weeks, if not months, of delays to get them reinstated.
Millions of people lost out on benefits they were entitled to, and some even died as a result.
The project was such a failure that the governor cancelled the contract three years into its ten-year term, for which IBM sued and received an additional $50 million on top of the $437 million it had already collected.
However, from a neoliberal perspective, the experiment did its job: many people who applied for benefits and were wrongly rejected – even those who successfully fought to have them reinstated – are now reluctant to claim benefits again, even when they are entitled to them, because the experience of dealing with the system was so horrendous.
Final Thoughts/Conclusions
The problem with relying on databases and algorithms in public services to determine who should and shouldn’t get access to scarce resources is that we are using them as an ‘empathy override’ – ‘I don’t want to decide which of the 58,000 homeless people in LA gets housed, so let’s just enter some data and leave the decision to a computer’.
Then there’s the fact that relying on algorithms is essentially triage for social problems the system cannot cope with – the reason we need algorithms is that there aren’t enough resources. Perhaps, instead, we should get over neoliberalism, get a grip on tax havens, and provide sufficient resources to meet social need; then we wouldn’t need algorithms to make the unpleasant decisions humans cannot face.
It may even be that the use of algorithms in public services compounds social problems: we focus on collecting ever-increasing amounts of data and using it more effectively, when what we really need to solve our welfare problems is political change.
Eubanks suggests that these abuses of algorithms by states may be possible because we are still in the ‘Wild West’ period of big data, in which the people who create and use these databases have an ‘anything goes’ mentality about how the data collected can be used.
However, this period may be coming to an end: more and more people, especially those in the marginalized communities most harmed by these systems, are becoming concerned about how their data is shared – whether it’s legal and whether it’s morally right – and are developing ways of resisting giving away so much data.
Virginia Eubanks is a professor of political science at the State University of New York.