'AI' at The European Festival of Journalism and Media Literacy
Last week I attended the European Festival of Journalism and Media Literacy, to join a panel on Artificial intelligence – Which skills do I need?. Moderated by Lee Hibbard, the panel consisted of writer Maria Farrell, computational linguist & YouTuber Letiția Pârcălăbescu, and me.
After a brief intro talk by Letiția – covering what “AI” is and can (and, more importantly, cannot) do – our discussion quite quickly drifted towards the issues of hype and power. In this, it also mirrored a point made in an earlier panel on “AI” & journalism, in which Divina Frau-Meigs argued that “Artificial Intelligence” is itself a misnomer: there is nothing intelligent about it; instead, it produces “Artificial Information”. This critical focus on power maybe isn’t too surprising, given how AI is mindlessly and highly problematically thrown at us these days, not only by corporations but also by institutions.
Which isn’t even that new: arguably, it goes back at least to the British Post Office scandal of the early 2000s, in which “faulty software” (no one bothered to call that one “AI” yet) screwed up accounting, leading to hundreds of people being wrongly accused of theft & fraud. But of course, that doesn’t mean that anyone has learned a lesson from it: in the Netherlands, algorithmic decision making led to tens of thousands of people being penalized over automatically generated fraud suspicions concerning child care benefits – based on dubious risk indicators. In France, the Caisse d’allocations familiales (CAF), part of the social security system covering family & housing benefits, uses a similar approach and keeps doubling down on it despite all criticism. And in the US, the National Eating Disorder Association decided to fire its helpline staff and replace them with an “AI” chatbot when the staff unionized.
Jointly, these examples give us a good overview of the problems with deploying any automated decision making tools. Predictive algorithms are trained to reproduce the most likely (or average/mediocre) output, based on all the data that was fed into the system to generate those future predictions. Because of that, these tools can only reproduce all of our societal problems, just faster and at scale. In the case of the Dutch child care benefits, the system’s risk factors included having a dual nationality and having a low income (you know, the thing that makes you likely to request benefits in the first place…). And that’s not because “the algorithm” decided this, but because the tax authorities fed their own “blacklist” data into the system – a blacklist that, already before algorithmic decision making, focused on “people with ‘a non-Western appearance’”. No amount of “de-biasing the training data” would fix that racist behaviour, as it seems very much wanted by the people using those systems. Typically, “the purpose of a system is what it does” refers to unintended consequences, but it’s hard to even speak of “unintended” in this case.
This does not mean that using automated decision making doesn’t change anything: the use of these tools provides a way of ‘empiricism-washing’, giving decisions the veneer of being scientific or “more objective”. Beyond the Dutch tax authorities, this also holds true for the British Post Office and the French CAF. The director of the latter outright claimed that their “algorithm is neutral” and would be “the opposite of discrimination”. But of course the reality is far from that, with CAF’s approach also targeting people with low incomes, those living in ‘disadvantaged neighbourhoods’, and those with disabilities. This move towards algorithmically made decisions thus provides a way to further deepen epistemic injustices & violences, as any criticism of decisions can be swatted away with “computer says no”.
Lastly, the example of the eating disorder helpline shows what Cory Doctorow described as one of the big risks of “AI”: we’re nowhere near a place where bots can steal your job, but we’re certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job – which is exactly what happened when that chatbot gave dangerous advice to people looking for help with eating disorders. And the reasoning behind deploying automated decision making in institutions is quite similar: maybe there is less overt squashing of labor rights, but it is equally present, just hidden behind the promised “efficiency improvements” and “savings”.
Between automating bad decisions and the increasing lack of recourse against those decisions (you can’t argue with a computer, after all), it’s easy to see how the current use of automated decision making contributes to hollowing out our institutions in the name of efficiency, and thus to undermining our trust in them. That’s why the “AI skills” that we came up with relate to the ability to critically question power in relation to these decision making tools, including: How are these tools being developed? By whom, and for what purpose? And in whose service are they being deployed?
And based on our panel discussion, answering those questions in practice might include supporting independent journalists & journalism, being politically engaged, and considering joining your union – in case your boss might be suckered into the AI hype.
p.s. Yesterday evening I called my mom and told her about the panel. We ended up chatting about some AI use cases that she might already interact with, such as the autocorrect/autocomplete on her phone, which she encounters a lot. Her reaction was “I see, so it can be something useful, but you need to always double check, as otherwise it just makes up the weirdest things” – thus accidentally paraphrasing that alleged 1979 IBM slide: A computer can never be held accountable, therefore a computer must never make a (management) decision.