Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/35613
Appears in Collections:History and Politics Journal Articles
Peer Review Status: Refereed
Title: Structured like a language model: Analysing AI as an automated subject
Author(s): Magee, Liam
Arora, Vanicka
Munn, Luke
Contact Email: vanicka.arora@stir.ac.uk
Keywords: AI
psychoanalysis
automated subjects
large language models
reinforcement learning from human feedback (RLHF)
chatbot interviews
Issue Date: 8-Nov-2023
Date Deposited: 21-Nov-2023
Citation: Magee L, Arora V & Munn L (2023) Structured like a language model: Analysing AI as an automated subject. Big Data and Society, 10 (2). https://doi.org/10.1177/20539517231210273
Abstract: Drawing from the resources of psychoanalysis and critical media studies, in this article we develop an analysis of large language models (LLMs) as ‘automated subjects’. We argue that the intentional, fictional projection of subjectivity onto LLMs can yield an alternate frame through which artificial intelligence (AI) behaviour, including its productions of bias and harm, can be analysed. First, we introduce language models, discuss their significance and risks, and outline our case for interpreting model design and outputs with support from psychoanalytic concepts. We trace a brief history of language models, culminating with the releases, in 2022, of systems that realise ‘state-of-the-art’ natural language processing performance. We engage with one such system, OpenAI's InstructGPT, as a case study, detailing the layers of its construction and conducting exploratory and semi-structured interviews with chatbots. These interviews probe the model's moral imperatives to be ‘helpful’, ‘truthful’ and ‘harmless’ by design. The model acts, we argue, as the condensation of often competing social desires, articulated through the internet and harvested into training data, which must then be regulated and repressed. This foundational structure can however be redirected via prompting, so that the model comes to identify with, and transfer, its commitments to the immediate human subject before it. In turn, these automated productions of language can lead the human subject to project agency upon the model, occasionally effecting further forms of countertransference. We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.
DOI Link: 10.1177/20539517231210273
Rights: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
Licence URL(s): http://creativecommons.org/licenses/by/4.0/

Files in This Item:
File: magee-et-al-2023-structured-like-a-language-model-analysing-ai-as-an-automated-subject.pdf
Description: Fulltext - Published Version
Size: 733.78 kB
Format: Adobe PDF

This item is protected by original copyright.

A file in this item is licensed under a Creative Commons License.

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.