IRIS and Friends

On October 17th, we were represented with IRIS in the K1 foyer as part of the Science Festival of the City of Stuttgart. Eleven scientists presented their research areas on mostly interactive posters. There were exhibits to experience and touch, and our driving robot Double was also there.

Interesting conversations took place not only with our visitors but also among the researchers themselves, who got to know one another and even came up with ideas for further joint projects.

Here we introduce the individual researchers and their respective exhibition topics:

Alexander Achberger demonstrates his system.

How can we make virtual reality in automotive engineering more realistic? Alexander Achberger addressed this question in his doctoral thesis at the Visualization Institute (VISUS) of the University of Stuttgart and at Mercedes-Benz Group AG. He showed how new haptic devices such as PropellerHand, STRIVE, and STROE are redefining the boundaries of virtual reality in the automotive sector. In particular, he presented his haptic feedback system STRIVE, which he invented and developed during his PhD. STRIVE can physically simulate digital collisions across the entire body, allowing the wearer, for example, to physically bump their head when exiting a virtual car.


Tabea Berberena points at her poster.

Tabea Berberena is concerned with the increasing prevalence of chatbots in our daily lives and questions how trust in these technologies is formed. She aims to investigate what influences trust in chatbots. In a study, participants interacted with several chatbots that were presented differently and received advice on ethical decisions in various professional situations. Which chatbot is trustworthy?


Ivan Dula in conversation in front of his poster.

Ivan Dula presented his poster on "Dynamics of addiction to AI in organizations." As organizations integrate AI ever more deeply into their operational processes, it becomes increasingly important to understand how people react to its implementation. Studies show that people respond to automation in different ways: they reject AI and its advice, are receptive to it, or let it take over decision-making without adequate supervision. His research focuses on the connection between the underlying system structure and the observed behaviors, uncovering how people become addicted to using AI in organizational contexts and what they can do to avoid it.

Lukas Erhard in front of his poster.

Lukas Erhard is an academic staff member at the Interchange Forum for Reflecting on Intelligent Systems (SRF IRIS). In this context, and as part of the Chair of Computational Social Science (CSS), he sits at the interface between the social and computer sciences and works on integrating machine learning techniques into the social science research process. Since the beginning of his master's studies, his methodological focus has been on the processing of large amounts of data, statistics, machine learning, natural language processing, and social network analysis.
His dissertation project analyzes the influences of (social) media on the opinion-forming process and the attitudes of individuals.

Sara Hanke standing and explaining.

Sara Hanke's research interests lie in particular in (right-wing) extremist and populist parties and attitudes, as well as their influence on party competition and political communication. She is looking at which political issues are taken up by extreme parties in the European Parliament and analyzing whether it helps other parties win voters if they also adopt these issues. Sara is also investigating whether this influences their political success - a highly topical question in the 2024 super election year.

Susanne Hindennach

Susanne Hindennach deals with the question of how we talk about AI systems, because this influences how we perceive them and how we deal with them. Terms such as "artificial intelligence" attribute mental states to the systems. Susanne Hindennach has investigated whether researchers themselves use such terms and whether this influences how AI systems are perceived. In particular, she was interested in whether the attribution of mental states leads to people holding the systems responsible.

Amelie Schreck and Gili Ron sitting on a table, with their robot arm between them.

Gili Ron and Amelie Schreck are working on the project “Towards human-robot co-agency: AI and feminist technoscience perspectives on diversity, demography and democracy in human-robot collaboration in architecture”. This project at the interface of architectural informatics and social sciences addresses the productivity and sustainability challenges in the construction industry with a novel approach to human-robot collaboration (HRC) that combines intuitive communication, AI and user feedback. This aims to attract a more diverse workforce and foster creativity, agency and trust by making industrial robots better collaborators with humans. The project integrates feminist technoscience perspectives and focuses on equitable development and decision-making, engaging various stakeholders to ensure a fair and inclusive adoption of HRC methods in architecture.

Curtis Runstedler

Curtis Runstedler is an IRIS postdoctoral fellow in the Department of Literatures and Cultures. His research examines how literary narratives about human-robot interactions convey ideas of trust and distrust in 21st-century fiction - particularly how human characters seem predominantly to distrust robots and how readers respond to such interactions. The issue of trust is central to the field of AI today, particularly with regard to AI regulation and machine learning compliance, but also more generally to public perceptions of AI and our everyday interactions with emerging intelligent systems. While the novels he studies primarily deal with advanced AI, they also help address current issues related to trust and AI, particularly the integration of AI into our daily lives. Fictional narratives provide useful stories for simulating or imagining human-robot interactions and trust, which in turn helps in assessing the public's reactions and assumptions regarding AI. For example, in "Klara and the Sun" (Kazuo Ishiguro, 2021), the "artificial friend" Klara is portrayed as more human-like and selfless than her human counterparts, yet she is constantly viewed with suspicion by those around her, and (spoiler alert!) despite her good deeds, they leave her to rot in a landfill at the end of the novel.

Mara Seyfert

Mara Seyfert and Marvin Tritschler are jointly investigating generative AI: their research addresses the opportunities and risks that generative AI - and LLMs in particular - bring to democratic systems. In political debate, LLMs can serve as a source of knowledge but can also promote disinformation. Unlike traditional search engines, LLMs act as a kind of conversation partner for their users. Does this already make them actors within the political discourse?

Cornelia Sindermann

Cornelia Sindermann deals with the ever-present danger that generative AI - including large language models such as ChatGPT - simplifies the creation of disinformation ("fake news") and thus contributes to further "infodemics". Against this background, it is becoming increasingly important that people themselves can distinguish true information from disinformation, especially disinformation created with generative AI. In the short test by the "Computational Digital Psychology" team of the "Interchange Forum for Reflecting on Intelligent Systems" at the University of Stuttgart, visitors were able to test for themselves how well they can distinguish true information from disinformation.

Simone Brandes

Dipl.-Kulturwis.

Public Engagement Coordinator IRIS
