
About Curtis Runstedler

Electric Dreams: Literary Narratives as Tools for Critically Reflecting on Intelligent Systems

Dr. Curtis Runstedler studies human-robot interactions and the negotiation of trust and distrust in contemporary AI narratives. He examines in particular how human characters predominantly seem to distrust robots, and how readers respond to such interactions.

Themes of trust are central to the field of AI today, especially with regard to AI regulation and compliance in machine learning, but also more widely to public perceptions of AI and our everyday interactions with emerging intelligent systems. While the novels he looks at primarily deal with advanced AI, they also help us think through current issues surrounding trust and AI, particularly the integration of AI into our everyday lives. Fictional narratives provide useful storyworlds for simulating or envisioning human-robot interactions and trust, which in turn are useful for gauging public responses and assumptions about AI. In Klara and the Sun (Kazuo Ishiguro, 2021), for example, the “Artificial Friend” Klara is presented as arguably more human-like and selfless than her human counterparts, yet the humans around her constantly view her with distrust, and (spoiler alert!) despite her good actions, they leave her at a landfill to waste away by the novel’s end.

Curtis Runstedler was asked the following questions:

Curtis, how did you get into your research? Did you already study robots when you were young?

When I completed my Ph.D. (on medieval alchemy and Middle English poetry), I became more and more interested in the intersections between robotics and the occult sciences, especially after reading E.R. Truitt’s Medieval Robots. The missing link, I think, was Paracelsus’ homunculus, and when I came to the University of Stuttgart, I was really interested in how ideas of robots and trust manifested in the contemporary English novel, and how readers and human characters responded to these new technologies. I’ve always liked robots, especially in popular culture, and I’ve always been a proponent of encouraging relationships between literature and science.

Which three books were the most formative and important on this topic for you?

These three novels particularly motivated me for this topic: Kazuo Ishiguro’s Klara and the Sun (2021), Ian McEwan’s Machines Like Me (2019), and Annalee Newitz’s Autonomous (2016). Bonus: Truitt’s Medieval Robots!

What is special about your research?

My research is special because not much has yet been written about AI narratives and the concept of trust within the humanities. I think my research can also show the importance of literary narratives and storytelling for conveying scientific information and generating discussion among the public, not just within the Ivory Tower of academia. Public outreach is super important with AI narratives.

How exactly do you do your research? Do you read books or do you let programs do the work for you?

My research is primarily based on close readings of novels, interlinked with theoretical frameworks such as reader-response theory and trust theory (mainly grounded in philosophical approaches to trust at the moment).

What goals do you pursue with this work?

I hope to develop a conceptualization and methodology of trust within the context of AI that can be applied to the humanities, but hopefully also open up avenues of discussion and collaboration with other disciplines, such as sociology, psychology, computer sciences, and beyond.

What can we learn from robots?

I think certain narratives, such as Klara’s and Annie’s in Sierra Greer’s Annie Bot, can help us to think of robots beyond our anthropocentric lenses; that is, beyond what is merely useful for humans.

Do you think there will one day be robots controlled by AI?

I’m personally more worried about humans misusing technology for nefarious ends than about being controlled by AI. If I were old and battered and I needed a cybernetic enhancement or AI implant, I would probably accept it. But you never know!

When could robots be self-aware?

This could possibly happen one day, or perhaps never at all. There’s a big leap between where machine learning is now and the advanced AI of, say, Klara and the Sun or Adam in Machines Like Me. It’s worth pointing out that we are quite far from the advanced AI depicted in such fiction today.

Can you say something about your habilitation topic?

Yes, I have just started writing my Habilitation, and I am still navigating my topic but enjoying where it takes me. There’s a richness of contemporary literature that is largely unexplored or seldom examined (such as Alexander Weinstein’s excellent short story “Saying Goodbye to Yang”), so it’s quite an honour to research these novels further and do a deeper dive, so to speak.

What are your plans? Where do you see yourself in 10 years?

I’d like to complete my Habilitation and publish my second book on this topic and hopefully receive my full professorship. I’d also like to continue to collaborate on interdisciplinary approaches to AI research, especially since I think the humanities have much to offer in terms of the value of narratives and storytelling. In 10 years, I hope I can still be teaching and researching and working on something meaningful, and perhaps write a Western novel or three.

Contact

Curtis Runstedler

Keplerstr. 17, Stuttgart, Room: 4.030


Simone Brandes

Dipl.-Kulturwis.

Public Engagement Coordinator IRIS
