From Hallucination to Trust: Leveraging Chain-of-Thoughts and Empathy in LLM-Empowered Agent Interactions
Abstract:
Large Language Models (LLMs) have significantly enhanced user interactions across various platforms and have become indispensable in sectors such as customer service, valued for their immediate responses and personalized assistance. Despite this transformative impact, LLMs occasionally produce hallucinations: outputs that are factually incorrect, inconsistent, or entirely fabricated, which can undermine user trust. This study explores dialogue strategies, including Chain-of-Thought (CoT) reasoning and expressions of empathy, to mitigate these issues and enhance the user experience of LLMs in critical interaction scenarios. The findings are expected to inform future designs of LLM architectures and interaction strategies that prioritize both accuracy and the emotional dynamics of user engagement, improving the overall acceptance and effectiveness of LLM technologies in everyday applications.