(2025) Bucher - LLMs in Mental Health Care
“It’s Not Only Attention We Need”: Systematic Review of Large Language Models in Mental Health Care
Abstract
Background: Mental health care systems worldwide face critical challenges, including limited access, shortages of clinicians, and stigma-related barriers. In parallel, large language models (LLMs) have emerged as powerful tools capable of supporting therapeutic processes through natural language understanding and generation. While previous research has explored their potential, a comprehensive review assessing how LLMs are integrated into mental health care, particularly beyond technical feasibility, is still lacking.
Objective: This systematic literature review investigates and conceptualizes the application of LLMs in mental health care by examining their technical implementation, design characteristics, and situational use across different touchpoints along the patient journey. It introduces a 3-layer morphological framework to structure and analyze how LLMs are applied, with the goal of informing future research and design for more effective mental health interventions.
Methods: A systematic literature review was conducted across the PubMed, IEEE Xplore, JMIR, ACM, and AIS databases, yielding 807 records. After multiple rounds of screening and eligibility assessment, 55 studies were included. These were categorized and analyzed based on the patient journey, design elements, and underlying model characteristics.
Results: Most studies assessed technical feasibility, whereas only a few examined the impact of LLMs on therapeutic outcomes. LLMs were used primarily for classification and text generation tasks, with limited evaluation of safety, hallucination risks, or reasoning capabilities. Design aspects, such as user roles, interaction modalities, and interface elements, were often underexplored despite their significant influence on user experience. Furthermore, most applications focused on single-user contexts, overlooking opportunities for integrated care environments, such as artificial intelligence–blended therapy. The proposed 3-layer framework, consisting of the L1: LLM layer, L2: interface layer, and L3: situation layer, highlights critical design trade-offs and unmet needs in current research.
Conclusions: LLMs hold promise for enhancing accessibility, personalization, and efficiency in mental health care. However, current implementations often overlook essential design and contextual factors that influence real-world adoption and outcomes. The review underscores that the self-attention mechanism, a key component of LLMs, alone is not sufficient. Future research must go beyond technical feasibility to explore integrated care models, user experience, and longitudinal treatment outcomes to responsibly embed LLMs into mental health care ecosystems.