Abstract
People are turning to conversational LLMs for support, sometimes with fatal consequences, despite attempts by platform providers and regulators to curb this behaviour. We explore the use of LLMs during major life transitions, taking the ending of romantic relationships as a case study. We present media coverage of, and preliminary research on, this phenomenon and discuss the likely risks. We then present an exploratory analysis of English-language human-LLM conversations in two existing datasets to illustrate these risks. We find that the advice people receive from LLMs puts them at risk of harm, particularly those seeking to end relationships involving intimate partner violence (IPV). We end with recommendations for platform providers and users, for example advising providers to work with experts in IPV to ensure advice does not endanger users at an incredibly vulnerable moment in their lives. We call for more research into isolation, IPV and LLM use. This paper discusses IPV, including mention of specific violent acts.