Publication and Presentation at COLING 2025

Our findings on the question How well can LLMs reflect?, obtained through a combination of extensive human evaluations and experimentation with various prompting strategies, have been published in the main proceedings of the 31st International Conference on Computational Linguistics (COLING 2025). The paper was also selected for an oral presentation, which will take place on January 21, 2025, in Abu Dhabi, United Arab Emirates.

Read the paper at https://aclanthology.org/2025.coling-main.135.pdf

Title

How Well Can Large Language Models Reflect? A Human Evaluation of LLM-generated Reflections for Motivational Interviewing Dialogues

Abstract

Motivational Interviewing (MI) is a counseling technique that promotes behavioral change through reflective responses to mirror or refine client statements. While advanced Large Language Models (LLMs) can generate engaging dialogues, challenges remain for applying them in a sensitive context such as MI. This work assesses the potential of LLMs to generate MI reflections via three LLMs: GPT-4, Llama-2, and BLOOM, and explores the effect of dialogue context size and integration of MI strategies for reflection generation by LLMs. We conduct evaluations using both automatic metrics and human judges on four criteria: appropriateness, relevance, engagement, and naturalness, to assess whether these LLMs can accurately generate the nuanced therapeutic communication required in MI. While we demonstrate LLMs’ potential in generating MI reflections comparable to human therapists, content analysis shows that significant challenges remain. By identifying the strengths and limitations of LLMs in generating empathetic and contextually appropriate reflections in MI, this work contributes to the ongoing dialogue in enhancing LLM’s role in therapeutic counseling.

Reference

  • Basar, E., Sun, X., Hendrickx, I., de Wit, J., Bosse, T., de Bruijn, G.-J., Bosch, J., & Krahmer, E. (2025). How Well Can Large Language Models Reflect? A Human Evaluation of LLM-generated Reflections for Motivational Interviewing Dialogues. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1964–1982. Association for Computational Linguistics.

  • @inproceedings{basar2025reflect,
      author = {Basar, Erkan and Sun, Xin and Hendrickx, Iris and {de Wit}, Jan and Bosse, Tibor and {de Bruijn}, Gert{-}Jan and Bosch, Jos and Krahmer, Emiel},
      title = {How Well Can Large Language Models Reflect? A Human Evaluation of LLM-generated Reflections for Motivational Interviewing Dialogues},
      booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
      year = {2025},
      publisher = {Association for Computational Linguistics},
      pages = {1964--1982},
      url = {https://aclanthology.org/2025.coling-main.135/}
    }
    
