
Jackson, D., Courneya, M., 2023.

Unreliable Narrator: Reparative Approaches to Harmful Biases in AI Storytelling for the HE Classroom and Future Creative Industries

Output Type: Journal article
Publication: Brazilian Creative Industries Journal
Publisher: Universidade Feevale
ISSN: 2763-8677
URL: doi.org/10.25112/bcij.v3i2.3540
Volume/Issue: 3 (2)
Pagination: 16

Generative AI has the potential to amplify marginalised storytellers and their narratives through
powerful virtual production tools and the automation of processes such as artworking, scriptwriting and
video editing (Ramesh et al., 2022; Brown et al., 2020; Esser et al., 2023). However, the adoption of generative
AI into media workflows and outputs risks compounding cultural biases from dominant storytelling
traditions. Generative AI systems typically require many millions of novels, screenplays, images and
other media as training input to generate their synthetic narrative output. The stories produced can then
reproduce biases from these source texts through stereotypical character tropes, dialogue, word-image
associations and story arcs (Bianchi et al., 2022). Whilst these biases have been discussed extensively,
little has been written to date on how to prepare storytellers for the problems generative AI raises in
production. How can we engage with these tools without further isolating marginalised storytellers, and
in a way that encourages new voices to be heard?
The paper examines the potential issues that generative AI technologies raise for marginalised students
in the creative education sector and presents case studies that point towards a reparative approach for
creative producers and educators. It introduces some of the issues arising from the reproduced biases
of these LLMs and suggests potential strategies for incorporating awareness of these biases into the
creative process. To evidence and illustrate our approach, two short case studies are provided: the
Algowritten AI short story project, led by the authors with other volunteers as part of the Mozilla
Foundation's Trustworthy AI initiative, which identifies patterns of bias in AI-written narratives; and
Stepford, a novel reflective AI system designed to highlight instances of gender bias in AI-generated
text. Both case studies outline how reparative approaches to algorithmic creative production can
highlight and mitigate cultural biases endemic in generative media systems.