
IRIS Publications

A collection of IRIS-related publications by IRIS Members

Publications by IRIS Members

  1. 2025

    1. Ron, G., Wortmann, T., Kropp, C., & Menges, A. (2025). Human-Robot Reconfigurations: Advancing Feminist Technoscience Perspectives for Human-Robot-Collaboration in Architecture and Construction. In M. Kanaani (Ed.), The Routledge Companion to Smart Design Thinking in Architecture & Urbanism for a Sustainable, Living Planet (1st edition, pp. 669–679). Routledge, Taylor & Francis Group. https://doi.org/10.4324/9781003384113-72
  2. 2024

    1. Knuples, U., Falenska, A., & Miletić, F. (2024). Gender Identity in Pretrained Language Models: An Inclusive Approach to Data Creation and Probing. In Y. Al-Onaizan, M. Bansal, & Y.-N. Chen (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 11612–11631). Association for Computational Linguistics. https://aclanthology.org/2024.findings-emnlp.680
    2. Dönmez, E., Vu, T., & Falenska, A. (2024). Please note that I’m just an AI: Analysis of Behavior Patterns of LLMs in (Non-)offensive Speech Identification. In Y. Al-Onaizan, M. Bansal, & Y.-N. Chen (Eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 18340–18357). Association for Computational Linguistics. https://aclanthology.org/2024.emnlp-main.1019
    3. Sindermann, C. (2024). Relations between different components of group identification and types of social media political participation in the context of the Fridays for Future movement. Personality and Individual Differences, 230, 112773. https://doi.org/10.1016/j.paid.2024.112773
    4. Erhard, L., Hanke, S., Remer, U., Falenska, A., & Heiberger, R. H. (2024). PopBERT. Detecting Populism and Its Host Ideologies in the German Bundestag. Political Analysis. https://doi.org/10.1017/pan.2024.12
    5. Hillebrand, M. C., Sindermann, C., Montag, C., Wuttke, A., Heinzelmann, R., Haas, H., & Wilz, G. (2024). Salivary cortisol and alpha-amylase as stress markers to evaluate an individualized music intervention for people with dementia: feasibility and pilot analyses. BMC Research Notes, 17(1), Article 1. https://doi.org/10.1186/s13104-024-06904-7
    6. Brandenstein, N., Montag, C., & Sindermann, C. (2024). To Follow or Not to Follow: Estimating Political Opinion From Twitter Data Using a Network-Based Machine Learning Approach. Social Science Computer Review. https://doi.org/10.1177/08944393241279418
    7. Kaiser, J., & Falenska, A. (2024). How to Translate SQuAD to German? A Comparative Study of Answer Span Retrieval Methods for Question Answering Dataset Creation. In P. H. Luz de Araujo, A. Baumann, D. Gromann, B. Krenn, B. Roth, & M. Wiegand (Eds.), Proceedings of the 20th Conference on Natural Language Processing (KONVENS 2024) (pp. 134–140). Association for Computational Linguistics. https://aclanthology.org/2024.konvens-main.15
    8. Chen, H., Roth, M., & Falenska, A. (2024). What Can Go Wrong in Authorship Profiling: Cross-Domain Analysis of Gender and Age Prediction. In A. Faleńska, C. Basta, M. Costa-jussà, S. Goldfarb-Tarrant, & D. Nozza (Eds.), Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP) (pp. 150–166). Association for Computational Linguistics. https://aclanthology.org/2024.gebnlp-1.9
    9. Costa-jussà, M., Andrews, P., Basta, C., Ciro, J., Falenska, A., Goldfarb-Tarrant, S., Mosquera, R., Nozza, D., & Sánchez, E. (2024). Overview of the Shared Task on Machine Translation Gender Bias Evaluation with Multilingual Holistic Bias. In A. Faleńska, C. Basta, M. Costa-jussà, S. Goldfarb-Tarrant, & D. Nozza (Eds.), Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP) (pp. 399–404). Association for Computational Linguistics. https://aclanthology.org/2024.gebnlp-1.26
    10. Faleńska, A., Basta, C., Costa-jussà, M., Goldfarb-Tarrant, S., & Nozza, D. (Eds.). (2024). Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP). Association for Computational Linguistics. https://aclanthology.org/2024.gebnlp-1.0
    11. Go, P., & Falenska, A. (2024). Is there Gender Bias in Dependency Parsing? Revisiting “Women’s Syntactic Resilience”. In A. Faleńska, C. Basta, M. Costa-jussà, S. Goldfarb-Tarrant, & D. Nozza (Eds.), Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP) (pp. 269–279). Association for Computational Linguistics. https://aclanthology.org/2024.gebnlp-1.17
    12. Sardari, S., Sevim, S., Zhang, P., Ron, G., Leder, S., Menges, A., & Wortmann, T. (2024). Deep Agency: Towards human guided robotic training for assembly tasks in timber construction. In O. Kontovourkis, M. C. Phocas, & G. Wurzer (Eds.), Proceedings of the 42nd eCAADe Conference (Vol. 1, pp. 193–202). eCAADe (Education and Research in Computer Aided Architectural Design in Europe). https://papers.cumincad.org/data/works/att/ecaade2024_420.pdf
    13. Ron, G., Menges, A., & Wortmann, T. (2024). Critical Collaboration: Reflecting on Power and Agency in Human-Robot-Collaboration in Architecture and Construction, for a Diverse and Democratic Practice. In P. Eversmann, C. Gengnagel, J. Lienhard, M. Ramsgaard Thomsen, & J. Wurm (Eds.), Design Modelling Symposium Kassel 2024 – Scalable Disruptors: (Re)new(able) Materials and Circular Design and Construction Processes (pp. 191–204). Springer. https://doi.org/10.1007/978-3-031-68275-9_16
    14. Babiker, A., Alshakhsi, S., Sindermann, C., Montag, C., & Ali, R. (2024). Examining the growth in willingness to pay for digital wellbeing services on social media: A comparative analysis. Heliyon, 10(11), Article 11. https://doi.org/10.1016/j.heliyon.2024.e32467
    15. Falenska, A., Vecchi, E. M., & Lapesa, G. (2024). Self-reported Demographics and Discourse Dynamics in a Persuasive Online Forum. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 14606–14621). ELRA and ICCL. https://aclanthology.org/2024.lrec-main.1272
    16. Sindermann, C., Löchner, N., Heinzelmann, R., Montag, C., & Scholz, R. W. (2024). The Revenue Model of Mainstream Online Social Networks and Potential Alternatives: A Scenario-Based Evaluation by German Adolescents and Adults. Technology in Society, 102569. https://doi.org/10.1016/j.techsoc.2024.102569
    17. Kannen, C., Sindermann, C., & Montag, C. (2024). On the willingness to pay for the messenger WhatsApp taking into account personality and sent/received messages. Heliyon, e28840. https://doi.org/10.1016/j.heliyon.2024.e28840
    18. Scholz, R. W., Köckler, H., Zscheischler, J., Czichos, R., Hofmann, K.-M., & Sindermann, C. (2024). Transdisciplinary knowledge integration PART II: Experiences of five transdisciplinary processes on digital data use in Germany. Technological Forecasting and Social Change, 199, 122981. https://doi.org/10.1016/j.techfore.2023.122981
    19. Sindermann, C., Montag, C., & Elhai, J. D. (2024). The Degree of Homogeneity Versus Heterogeneity in Individuals’ Political News Consumption. Journal of Media Psychology. https://doi.org/10.1027/1864-1105/a000417
    20. Hagendorff, T. (2024). Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. ArXiv, 1–25. https://arxiv.org/abs/2402.08323
    21. Zermiani, F., Dhar, P., Strohm, F., Baumbach, S., Bulling, A., & Wirzberger, M. (2024). Individual differences in visuo-spatial working memory capacity and prior knowledge during interrupted reading. Frontiers in Cognition, 3. https://doi.org/10.3389/fcogn.2024.1434642
    22. Vaugrante, L., Niepert, M., & Hagendorff, T. (2024). A Looming Replication Crisis in Evaluating Behavior in Language Models? Evidence and Solutions. https://arxiv.org/abs/2409.20303
    23. Hagendorff, T. (2024). Deception abilities emerged in large language models. Proceedings of the National Academy of Sciences, 121(24), Article 24. https://doi.org/10.1073/pnas.2317967121
    24. Scholz, R. W., Zscheischler, J., Köckler, H., Czichos, R., Hofmann, K.-M., & Sindermann, C. (2024). Transdisciplinary knowledge integration – PART I: Theoretical foundations and an organizational structure. Technological Forecasting and Social Change, 202, 123281. https://doi.org/10.1016/j.techfore.2024.123281
    25. Meding, K., & Hagendorff, T. (2024). Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms. Philosophy & Technology, 37(1), Article 1.
    26. Alshakhsi, S., Babiker, A., Sindermann, C., Al-Thani, D., Montag, C., & Ali, R. (2024). Willingness to pay for digital wellbeing features on social network sites: a study with Arab and European samples. Frontiers in Computer Science, 6, 1387681. https://doi.org/10.3389/fcomp.2024.1387681
    27. Jalali Farahani, F., Hanke, S., Dima, C., Heiberger, R. H., & Staab, S. (2024). Who is targeted? Detecting social group mentions in online political discussions. Companion Publication of the 16th ACM Web Science Conference, 24–25. https://doi.org/10.1145/3630744.3658412
    28. Schneider, M., & Hagendorff, T. (2024). When Image Generation Goes Wrong: A Safety Analysis of Stable Diffusion Models. https://arxiv.org/abs/2411.15516
    29. Wirzberger, M., Bareiß, L., Herbst, V., Stock, A., & Kembitzky, J. (2024). Performance Expectancy Benefits Acceptance Towards Digital Self-Control Support. In SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4924933
    30. Berberena, T., & Wirzberger, M. (2024). The Impact of User Momentary Emotional State on Trust in a Faulty Chatbot. In SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4924934
  3. 2023

    1. Williams, J. R., Sindermann, C., Yang, H., Montag, C., & Elhai, J. D. (2023). Latent profiles of problematic smartphone use severity are associated with social and generalized anxiety, and fear of missing out, among Chinese high school students. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 17(5), Article 5. https://doi.org/10.5817/CP2023-5-7
    2. Zhang, Y., Yao, S., Sindermann, C., Rozgonjuk, D., Zhou, M., Riedl, R., & Montag, C. (2023). Investigating autistic traits, social phobia, fear of COVID-19, and internet use disorder variables in the context of videoconference fatigue. Telematics and Informatics Reports, 11, 100067. https://doi.org/10.1016/j.teler.2023.100067
    3. Fanton, N., Falenska, A., & Roth, M. (2023). How-to Guides for Specific Audiences: A Corpus and Initial Findings. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), 321–333. https://doi.org/10.18653/v1/2023.acl-srw.46
    4. Meyer, F., & Marthe, S. (2023). On the Collaboration Between Robots and Humans: a Conversation with Gili Ron and Thomas Wortmann (F. Meyer, Ed.). https://www.baunetz.de/baunetzwoche/baunetzwoche_ausgabe_8266279.html
    5. Hagendorff, T. (2023). Information Control and Trust in the Context of Digital Technologies. In C. Eisenmann, K. Englert, C. Schubert, & E. Voss (Eds.), Varieties of Cooperation (pp. 189–201). Springer Fachmedien Wiesbaden.
    6. Hagendorff, T., Bossert, L. N., Tse, Y. F., & Singer, P. (2023). Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals. AI and Ethics, 3(3), Article 3.
    7. Vetter, D., Amann, J., Bruneault, F., Coffee, M., Düdder, B., Gallucci, A., Gilbert, T. K., Hagendorff, T., van Halem, I., Hickman, E., Hildt, E., Holm, S., Kararigas, G., Kringen, P., Madai, V. I., Wiinblad Mathez, E., Tithi, J. J., Westerlund, M., Wurth, R., & Zicari, R. V. (2023). Lessons Learned from Assessing Trustworthy AI in Practice. Digital Society, 2(3), Article 3.
    8. Bossert, L., & Hagendorff, T. (2023). The ethics of sustainable AI: Why animals (should) matter for a sustainable use of AI. Sustainable Development, 31(5), Article 5.
    9. Hagendorff, T., Fabi, S., & Kosinski, M. (2023). Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nature Computational Science, 1–9.
    10. Ðula, I., Berberena, T., Keplinger, K., & Wirzberger, M. (2023). Hooked on artificial agents: a systems thinking perspective. Frontiers in Behavioral Economics, 2, 1223281. https://doi.org/10.3389/frbhe.2023.1223281
    11. Erhard, L., Hanke, S., Remer, U., Falenska, A., & Heiberger, R. H. (2023). PopBERT. Detecting populism and its host ideologies in the German Bundestag. CoRR, abs/2309.14355. https://doi.org/10.48550/ARXIV.2309.14355
    12. Masur, P. K., Hagendorff, T., & Trepte, S. (2023). Challenges in Studying Social Media Privacy Literacy. In S. Trepte & P. K. Masur (Eds.), The Routledge Handbook of Privacy and Social Media (pp. 110–124). Routledge.
    13. Amtsberg, F., Yang, X., Skoury, L., Ron, G., Kaiser, B., Sousa Calepso, A., Sedlmair, M., Verl, A., Wortmann, T., & Menges, A. (2023). Multi-Akteur-Fabrikation im Bauwesen. Bautechnik, 100(10), Article 10. https://doi.org/10.1002/bate.202300070
    14. Erhard, L., & Heiberger, R. (2023). Regression and Machine Learning. In J. Skopek (Ed.), Research Handbook on Digital Sociology (pp. 129–144). Edward Elgar Publishing. https://www.e-elgar.com/shop/gbp/research-handbook-on-digital-sociology-9781789906752.html
    15. Runstedler, C. (2023). Alchemy and Exemplary Poetry in Middle English Literature. Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-26606-5
    16. Hagendorff, T., & Danks, D. (2023). Ethical and methodological challenges in building morally informed AI systems. AI and Ethics, 3(2), Article 2.
    17. Hagendorff, T., & Fabi, S. (2023). Methodological reflections for AI alignment research using human feedback. ArXiv, 1–9.
    18. Hagendorff, T., & Fabi, S. (2023). Why we need biased AI: How including cognitive biases can enhance AI systems. Journal of Experimental & Theoretical Artificial Intelligence, 1–14.