| Time: | March 26, 2025, 2:00 p.m. – 3:00 p.m. |
|---|---|
| Venue: | Room 131 (UN 32.131), ground floor, Universitätsstr. 32 (entrance via Universitätsstr. 34), Campus Vaihingen |
Today's large language models (LLMs) excel at providing convincing answers across a broad spectrum of inquiries, and their conversational capabilities enable them to align closely with users' needs. However, this adaptability is beneficial only to the extent that models remain robust against adopting incorrect statements from user input.
Recent research demonstrates that even advanced models like GPT-4 can shift from initially correct answers to incorrect ones during multi-turn conversations, solely due to user input. In my talk, I will present my research exploring how uncertainty in LLMs can provide insights into their robustness against persuasion, while highlighting the specific challenges of quantifying uncertainty in these models.
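As a rough illustration of what quantifying uncertainty in an LLM can look like in practice, the sketch below computes the mean per-token predictive entropy of a causal language model over a given answer. This is a minimal sketch of one common, simple proxy, not the method presented in the talk; the model name `gpt2` is only a small public stand-in.

```python
# Minimal sketch: mean per-token predictive entropy as a naive uncertainty proxy.
# Assumption: "gpt2" stands in for whatever model is actually under study.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_token_entropy(text: str) -> float:
    """Average entropy (in nats) of the model's next-token distributions over `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # per-token entropy
    return entropy.mean().item()

print(mean_token_entropy("The capital of France is Paris."))
```

Higher values would indicate the model spreads probability mass over many continuations; such token-level proxies are easy to compute but, as the abstract notes, capturing meaningful uncertainty in conversational settings is considerably harder.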
The lecture is held in English.
Afterwards we invite you to a relaxed discussion over finger food and coffee.