CAISA Lab

News


2024

Florian Mai joins CAISA group

The CAISA group today welcomes our new member Florian Mai, who joins as a postdoctoral researcher. More about Florian.


Allison Lahnala successfully defends her PhD dissertation

Today, our group member Allison Lahnala successfully defended her PhD dissertation.

Congratulations!


CAISA group welcomes Tianyi Zhang

Tianyi Zhang from the University of Pennsylvania is visiting the CAISA group for the next three months. A brief introduction from Tianyi: “I am passionate about building intelligent agents that emulate human understanding and reasoning of world events. In contrast to human learning, which assimilates and accommodates information into brain schemas, a significant challenge with current Language Models (LMs), including the SOTA GPT-4, is their inability to automatically acquire and anchor structured knowl…


Lamarr NLP Researchers Train Multilingual Large Language Models Mitigating Stereotype Bias

Bias in large language models is a well-known and unsolved problem. In our new paper “Do Multilingual Large Language Models Mitigate Stereotype Bias?” we address this challenge by investigating the…


Perspective Taking through Generating Responses to Conflict Situations

Despite the steadily increasing performance language models achieve on a wide variety of tasks, they continue to struggle with theory of mind, or the ability to understand the mental state of other…


David Kaczér joins CAISA group

The CAISA group today welcomes our new member David Kaczér, who joins as a PhD student. More about David.


Frederik Labonte joins CAISA group

The CAISA group today welcomes our new member Frederik Labonte, who joins as a researcher. More about Frederik.


Joan Plepi successfully defends his PhD dissertation

Today, our first group member Joan Plepi successfully defended his PhD dissertation.

Congratulations!

More photos here.


CAISA group welcomes Dr. Cass Zhixue Zhao

Dr. Cass Zhixue Zhao from the School of Computer Science, University of Sheffield, will visit the CAISA group until September 30th. She is a Lecturer in Natural Language Processing in the Department of Compu…


NAACL2024 was great!

NAACL

Our group contributed three works: (1) scaling annotator modeling, (2) group fairness preservation under differential privacy: https://lnkd.in/e8Kh4Y9n, and (3) modeling information spreader behavi…


Unveiling Information Through Narrative In Conversational Information Seeking

We, as humans, have the ability to create and communicate narratives. We use narratives to make sense of the world around us, share knowledge, and solve complex problems. This got us wondering: How…


Four papers accepted at LREC-COLING 2024!

LREC-COLING empathy DeFaktS style-transfer

What a great start to the year! We are very excited to announce that four of our papers were accepted at LREC-COLING 2024.

“Appraisal Framework for Clinical Empathy: A Novel Application to Breaking Bad …


CAISA Highlights 2023!

Highlights 2023 Journey

Innovations, Insights, and Inspirations: Our Journey Through 2023

Hi everyone! As we say goodbye to a fantastic year full of new discoveries and fun research, we at the Conversational AI and Socia…


Senior Postdoctoral Researcher (E14 100%)

Jobs Senior Postdoctoral Researcher

We are offering an exciting full-time Senior Postdoctoral Researcher position (E14, 100%) in Natural Language Processing and Machine Learning. You will be part of the Lamarr Institute for Machine Learning and Artificial I…

2022

Unifying Data Perspectivism and Personalization: An Application to Social Norms

EMNLP2022 Data Perspectivism Personalization Social Norms

We are super excited to present our paper “Unifying Data Perspectivism and Personalization: An Application to Social Norms” by Joan Plepi, Béla Neuendorf, Lucie Flek, and Charles Welch at the EMNLP 2022 main conference.

In this work, we use English textual data in the form of posts about social norms from the Reddit community r/AmItheAsshole (AITA). As shown in the figure, users of this online community post descriptions of situations, often involving interpersonal conflict, and as…


Temporal Graph Analysis of Misinformation Spreaders in Social Media

TextGraphs-16 COLING2022 Misinformation Spreaders Temporal Graph Analysis Social Media

We are super excited to present our paper “Temporal Graph Analysis of Misinformation Spreaders in Social Media” by Joan Plepi, Flora Sakketou, Henri-Jacques Geiss, and Lucie Flek at the TextGraphs-16 workshop at COLING 2022.

Proactively identifying misinformation spreaders is an important step towards mitigating the impact of fake news on our society, especially in light of current events. The impact of time on fake news prediction has made the task even more challenging, as the content-based diffe…


IT Summer School Women4women

Summer School Women4women CAISA Marburg

In the last week before the return to school, 28.08.–03.09.2022, the CAISA Lab organized an IT Summer School for female high school students who are passionate about mathematics and/or computer science. The classes were designed and held by our female AI researchers*, with the rest of our lab members** contributing to the preparation and organization. The event was generously supported by Hessian.AI.

Participants of the IT Summer School, coming from various corners of Ger…


Interview with Joan Plepi, a PhD candidate in deep learning for NLP

Joan Plepi Interview PhD candidate

Today we interview Joan Plepi, a PhD candidate in our lab, focusing on user personalization techniques and applying them to Natural Language Processing problems on social me…


FACTOID - A New Dataset for Identifying Misinformation Spreaders and Political Bias

LREC Fake news spreader detection Fake news and political bias dataset Reddit dataset Fine-grained annotations

We are super excited to present our paper “FACTOID: A New Dataset for Identifying Misinformation Spreaders and Political Bias” by Flora Sakketou, Joan Plepi, Riccardo Cervero, Henri-Jacques Geiss…


LREC 2022 Investigating User Radicalization - A Novel Dataset for Identifying Fine-Grained Temporal Shifts in Opinion

LREC Opinion Dynamics Stance Detection Dataset Sociopolitical Language Dataset with Temporal

We are very excited to present our all-female-author paper “Investigating User Radicalization - A Novel Dataset for Identifying Fine-Grained Temporal Shifts in Opinion” by Flora Sakketou, Allison…


Interview with Dr. Flora Sakketou, postdoctoral researcher in deep learning for NLP

Flora Sakketou Interview Postdoctoral researcher

Today we interview Flora Sakketou, a postdoctoral researcher in our lab, focusing on developing deep learning optimization algorithms and applying them to Natural Language Processing p…

2021

WiNLP @EMNLP 2021 Automated Template Paraphrasing for Conversational Assistants

Conversational Assistants Automated Template Paraphrasing

We are excited to present our paper “Automated Template Paraphrasing for Conversational Assistants” by Liane Vogel and Lucie Flek at Widening NLP at EMNLP 2021.

In this paper, we explore the usage of automatic paraphrasing models such as GPT-2 and CVAE to augment template phrases for task-oriented dialogue systems while preserving the slots. Additionally, we systematically analyze how far manually annotated training data can be reduced.
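A minimal sketch of the slot-preserving idea (our illustration, not the paper's pipeline): slot markers such as `{time}` are masked with opaque placeholder tokens before paraphrasing and restored afterwards, so the paraphraser cannot alter them. The `toy_paraphrase` function below is a hypothetical rule-based stand-in for the GPT-2/CVAE models used in the paper.

```python
import re

SLOT_RE = re.compile(r"\{(\w+)\}")

def mask_slots(template):
    """Replace each {slot} with an opaque token SLOT0, SLOT1, ..."""
    slots = SLOT_RE.findall(template)
    masked = template
    for i, name in enumerate(slots):
        masked = masked.replace("{" + name + "}", f"SLOT{i}", 1)
    return masked, slots

def unmask_slots(text, slots):
    """Put the original {slot} markers back in place of SLOTi tokens."""
    for i, name in enumerate(slots):
        text = text.replace(f"SLOT{i}", "{" + name + "}")
    return text

def toy_paraphrase(text):
    """Hypothetical stand-in for a neural paraphraser (GPT-2 / CVAE in the paper)."""
    return text.replace("Set an alarm for", "Please create an alarm at")

def augment(template, paraphraser=toy_paraphrase):
    """Paraphrase a template while keeping its slots intact."""
    masked, slots = mask_slots(template)
    return unmask_slots(paraphraser(masked), slots)
```

For example, `augment("Set an alarm for {time}")` yields `"Please create an alarm at {time}"`, with the `{time}` slot preserved.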

We extrinsically evaluate the performance of a natura…


EMNLP 2021 HYPMIX, Hyperbolic Interpolative Data Augmentation

EMNLP Data Augmentation Riemannian Hyperbolic

We are looking forward to presenting our paper “HYPMIX: Hyperbolic Interpolative Data Augmentation” by Ramit Sawhney, Megh Thakkar, Shivam Agarwal, Di Jin, Diyi Yang, and Lucie Flek at EMNLP 2021.

In this paper we propose HypMix, a novel model-, data-, and modality-agnostic interpolative data augmentation technique operating in the hyperbolic space, which captures the complex geometry of input and hidden state hierarchies better than its contemporaries.

We devise a novel Möbius Gyromidpoint Lab…
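The core idea can be sketched as mixup performed in hyperbolic rather than Euclidean space. Below is a simplified illustration (our sketch, not the paper's method): instead of the Möbius gyromidpoint used in HypMix, we interpolate two points of the Poincaré ball in the tangent space at the origin via the logarithmic and exponential maps.

```python
import numpy as np

def log0(x, eps=1e-9):
    """Logarithmic map at the origin of the Poincare ball (curvature -1):
    maps a ball point to a tangent vector at the origin."""
    n = np.linalg.norm(x)
    return np.arctanh(min(n, 1 - eps)) * x / max(n, eps)

def exp0(v, eps=1e-9):
    """Exponential map at the origin: maps a tangent vector back to the ball."""
    n = np.linalg.norm(v)
    return np.tanh(n) * v / max(n, eps)

def hyp_mix(x, y, lam):
    """Interpolate two ball points in the origin's tangent space;
    lam=1 returns x, lam=0 returns y."""
    return exp0(lam * log0(x) + (1 - lam) * log0(y))
```

With `lam=0.5` this produces a point between `x` and `y` that stays inside the unit ball, the hyperbolic analogue of the convex combination used by Euclidean mixup.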


EMNLP 2021 Perceived and Intended Sarcasm Detection with Graph Attention Networks

EMNLP User Representation Sarcasm Detection Social Graph Graph Attention Network

We are looking forward to presenting our paper “Perceived and Intended Sarcasm Detection with Graph Attention Networks” by Joan Plepi and Lucie Flek at Findings of EMNLP 2021.

In this work, we propose a framework jointly leveraging (1) a user's context from their historical tweets together with (2) the social information from the user's conversational neighborhood in an interaction graph, to contextualize the interpretation of the post. We use graph attention networks (GAT) over users and tweets in…
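The attention mechanism at the heart of such a framework can be illustrated with a toy single-head graph attention layer (our numpy sketch in the spirit of Veličković et al.'s GAT, not the paper's implementation; all names and shapes are illustrative). Each node (a user or tweet) aggregates its neighbors' features, weighted by learned attention scores.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_layer(H, A, W, a):
    """One graph-attention head.
    H: (n, d) node features, A: (n, n) 0/1 adjacency with self-loops,
    W: (d, k) projection, a: (2k,) attention vector."""
    Z = H @ W                         # projected node features
    k = Z.shape[1]
    # attention logits e_ij = LeakyReLU(a_left . z_i + a_right . z_j)
    e = leaky_relu((Z @ a[:k])[:, None] + (Z @ a[k:])[None, :])
    e = np.where(A > 0, e, -1e9)      # attend only to graph neighbours
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)   # softmax over each neighbourhood
    return att @ Z, att
```

Each row of `att` is a probability distribution over that node's neighborhood, so the output is a neighborhood-weighted average of the projected features.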


NAACL 2021 Suicide Ideation Detection via Social and Temporal User Representations using Hyperbolic Learning

NAACL User Representation Temporal Modeling Social Graph Hyperbolic Learning Suicide Ideation

We are looking forward to presenting our paper “Suicide Ideation Detection via Social and Temporal User Representations using Hyperbolic Learning” by Ramit Sawhney, Harshit Joshi, Rajiv Ratn Shah a…


ESWC2021 Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs

ESWC Conversational Question Answering

We are delighted that the paper “Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs” by Joan Plepi, Endri Kacupaj, Kuldeep Singh, Harsh…


Two papers with contributions of CAISA Lab members accepted to EACL 2021

EACL User Representations Conversational Question Answering

What a great start to the year! We are delighted that two full research papers with contributions from our lab members have been accepted to EACL ‘21, the 16th Conference of the European Chapter …

2020

BMBF funds Lucie Flek to establish an AI research group on Dynamically Social Discourse Analysis

DynSoDa Social Media User Representations BMBF

Prof. Dr. Lucie Flek receives a grant of over 1 million euros from the Federal Ministry of Education and Research (BMBF) to establish an Independent Research Group for her project DynSoDA: Dynamically Social Natural Language Processing for Online Discourse Analysis. The four-year project is part of the BMBF support program for young researchers working in the field of Artificial Intelligence.

Lucie Flek points out that one of the challenges of today’s NLP models is that they typically assume o…
