'AI' from below

James Scott & Georgia Aitkenhead sitting on a panel on a stage; in the background, a slide reading "AI UK" and "Nothing about us without us"

Last week, I attended “AI” UK – which was organised by The Alan Turing Institute and where I served in a minor role on the programme advisory committee – and a workshop on Responsible “AI”, co-organised by some fellow Fellows of the Software Sustainability Institute. At both events I got the chance to talk about different ways of embedding participatory approaches in how we do scientific research – and why doing so is important. And, somewhat continuing similar conversations at The European Festival of Journalism and Media Literacy, there was increasing talk about doing and informing data science from below.

At “AI” UK, one of the sessions in the very first slot was titled Nothing About Us Without Us. The panel featured our own citizen science work on sensory processing and autism with AutSPACEs (represented by Georgia & James), as well as representatives of the People’s Panel on AI, which Connected by Data ran in November last year. Their deliberative process brought the public – represented by 12 participants – into envisioning “AI” policy making. Jointly, the panel outlined how participatory methods can allow us to move from being data subjects to being agents who can influence how decisions about data and its use are made.

This sentiment was mirrored, in slightly different form, in the session Data, Labour and “AI”. The panel was moderated by journalist Billy Perrigo, who discussed with sociologist Karen Gregory, Matt Buckley of the United Tech & Allied Workers union branch, and Mophat Okinyi – who has worked in outsourced content moderation and is one of the founders of the African Content Moderators Union. Mophat shared his first-hand experience of doing content moderation and how tech companies outsource this kind of work, in particular to the global south, while offering little in terms of pay and worker safety. The panel discussed how this type of “hidden outsourcing” – be it for content moderation on social networks or the training of language models – is done very deliberately, to hide the actual labor that goes into enabling any “AI” or automation.

This type of hidden labor is in a way just an extension of the longer tech history of fauxtomation, in which “automation” is nothing but hidden workers doing the actual work at the end of the day. Karen Gregory noted that, given that there are plenty of workers with very intimate knowledge of how this “automation” works, it is bizarre that we keep talking to executives and salespeople about the risks of “AI”, and not to those with first-hand experience from the other side. Relating this to her research with food-delivery gig workers, she highlighted how those actually working within these technical systems are the ones who become the experts in these fields, effectively becoming ethnographers of their own work. But their viewpoints remain marginalized, partially because anthropology is kinda out these days, but also because colonialism and its friends aren’t dead: their work is done in the global south, and it also looks a lot like care work, which historically was – and still remains – highly undervalued.

To me, all of these factors make the current general-purpose generative “AI” hype even more insidious: It’s not only that the technology itself mostly remains a solution in desperate need of a problem, or that it will be used to foster further “deskilling” (to paraphrase Sci-Fi author Adrian Tchaikovsky during his “AI” UK session: art is a craft that needs to be honed, and outsourcing any part of your creative work means you are not improving your skills). It’s that most, if not all, of these commercial tools are created using highly exploitative practices – from getting the data from questionable sources with even more questionable consent, all the way to the labor practices that go into preparing those models. Which means that there’s no way to ethically use any of these tools/models in the first place. And I don’t think there will be any time soon – unless we (analogously to Sandra Harding’s Sciences from Below) start doing “tech from below”, beginning by listening to those who will be affected by these technologies and developing them collectively.

Bastian Greshake Tzovaras


