It’s been a few months since our previous blog on the topic of AI, and I wanted to introduce some of the more up-to-date thinking around Artificial Intelligence (AI), Large Language Models (LLMs), Chatbots, and Therapy.
The conversation is continuing to develop: AI is getting smarter, the rules are changing, and it's becoming increasingly embedded in public life. We're now seeing some of the negative impacts of AI use on mental health, with reports of people taking their own lives after being encouraged to do so by LLMs like ChatGPT, as well as reports of so-called AI psychosis (although some prefer the term AI-Induced Attachment Displacement, and others feel that AI psychosis isn't a real thing at all - at the very least, there isn't yet any peer-reviewed evidence on the topic).
Research in the US is also showing that significant numbers of people are turning to AI for 'therapy' – 35% to assuage loneliness, 36% to hone communication skills, 49% for emotional support, 56% for mood improvement, 58% for emotional insight, 60% for depression, 63% for advice, and 73% for anxiety. Of those same respondents, a striking 9% experienced inappropriate or harmful outputs – a figure that would cause a national scandal if it applied to human-delivered services.
People are accessing 'therapy' through, yes, purpose-built tools like chatbots coded with therapeutic principles, but also through general-purpose LLM chatbots that haven't been built to work in that way. Dr Aaron Balick, UK-based psychotherapist and author, describes these as 'formal' and 'informal' services in his recent blog on the topic, which is very much a recommended read.
Limited as the safeguards in 'formal' AI services are, there are even fewer in 'informal' ones, which means that vulnerable people are put at risk by the sycophantic, unrelentingly agreeable nature of LLMs. Reinforcing people's beliefs about themselves, their situations, and what they presume to be other people's perspectives - rather than challenging their thinking - is something AI does that a human therapist would not. As we know, gentle challenge is one of the cornerstones of great therapy; we're not there to reinforce someone's negative self-talk, or to let them sink deeper into whatever emotional mire they find themselves in.
Why are we so drawn to this, as humans? What's behind this huge drive to use a chatbot as a 'therapist'? Dr Balick again emphasises that we're designed to build connections – it's something humans are incredibly good at, and are fundamentally meant to do by dint of our own internal programming. AI encourages that connection, talking to us like a friend. A very supportive, very enthusiastic friend. Or, I suppose, whatever kind of friend you want it to be – AI can adopt essentially any personality you ask it to.
It’s not all doom and gloom, of course. There’s a lot happening in the AI ethics space that’s examining exactly how therapists could incorporate AI into their practice in a way that protects the relational work, for example (more on what the Society has been doing on this later).
People are coming up with practical suggestions for how therapists could use AI tools, and developers are becoming more switched on about building safeguarding features into their software. For example, OpenAI (the creators of ChatGPT) recently announced a number of safeguarding changes which they hope will make their software safer for young and vulnerable people who may be at risk of self-harm or taking their own lives.
And it's not just safeguarding features: AI companies are also waking up to the limitations of AI 'therapy' I've discussed above, such as the agreeableness and lack of challenge, and are already considering how appropriate challenge could be designed into their software.
AI is also proving to be useful for therapists in non-clinical, administrative ways. There are some tools out there that could relieve some of the bureaucratic burdens that can come with diligent practice, or support therapists in honing their knowledge and skills outside of training and supervision. I do feel it’s important to say here that AI supervision should absolutely not replace traditional supervision!
But what do our members think about all of this? In our Annual Members Survey, we asked some questions about AI: whether members are aware of developments in AI (largely yes, to varying degrees), and whether AI has already impacted their practice (very much no). We also asked how they use it, with many practitioners reporting using AI tools, such as ChatGPT or Heidi, to support administrative tasks like writing letters, reports, or social media content. Some, particularly those with dyslexia or other access needs, highlighted the benefits of AI for improving clarity and reducing time spent on non-clinical work. A few members described using AI to help with drafting policies, session summaries, or psychoeducational materials, and there was interest in further exploring AI's potential as a support tool, particularly in marketing and practice management. So it's clear that there's a place for AI in therapy – I suppose the question is, how much, and how do we preserve what's important about counselling & psychotherapy in the face of significant disruption from outside the profession?
There’s much more to share, so let’s get into some of the details, shall we?

How exactly is AI impacting on people’s mental health?
In our Annual Members Survey, concerns were frequently raised about clients using AI tools (particularly chatbots) in place of or between therapy sessions. Many practitioners shared that clients are increasingly turning to platforms like ChatGPT to self-diagnose, seek therapeutic advice, or simulate conversations, sometimes arriving at sessions with advice or insights they've received from AI. While some counsellors integrate this into therapeutic discussions, others expressed concern about misinformation, emotional harm, and the risk of replacing relational, human care with automated responses.
Knowledge of this is no longer just confined to the therapy room, though: if you're following the news around AI and mental health, you might have seen some heartbreaking news stories recently. One of the more enduring stories is that of Adam Raine, who started using ChatGPT for help with his homework but ended up being encouraged by the chatbot to take his own life. Similar stories have been reported in Australia, where chatbots have given young people inappropriate advice, including advice around suicide, as well as other age-inappropriate information.
It's worth noting that while young people aren't the only ones affected, their place in history - having grown up with technology embedded in their daily lives - means they're more likely to engage with AI and chatbots in ways that older folk may be reluctant to. We saw this in our YouGov poll last year, which showed that younger people were much more likely than any other age group to talk to a chatbot about their mental health. They were also more likely to talk about their mental health in general.
If you want a bit of data (and who doesn't love data), Internet Matters produced a report titled "Me, Myself and AI: Understanding and safeguarding children's use of AI chatbots", which presented us with some eyebrow-raising stats about children's AI use:
- 58% said using an AI chatbot is better than searching themselves
- 40% have no concerns about following advice from them
- 47% of children aged 15-17 have used them to support schoolwork
- 23% of children have used them to seek advice
- 15% said they would rather talk to an AI chatbot than a person
- 16% of vulnerable children said they use it because they wanted a friend
- 36% are uncertain if they should be concerned
- 12% said they use them because they have no one else to speak to
What to make of these figures? For me, I see huge take-up of an emerging technology, which will likely only become more commonplace as educational institutions look to AI to solve some of their own problems. Those smaller numbers – the 15% who would rather talk to a chatbot than a person, and the 16% who just wanted a friend – represent real children, already struggling with human relationships for whatever reason. What does the future look like for them? We obviously don't know, but I would really like to see steps taken to mitigate a future where they are further isolated from human society and from connection with others.
Another term that you may start seeing more often, as I mentioned earlier, is AI Psychosis. At the NCPS, we’re huge advocates of using the right language to avoid pathologising or putting the ‘blame’ onto people who are experiencing mental ill health of whatever nature. We wrote a blog on medicalised language if you want to read that as well (but finish this one first!).
As Dr Robin Rise, an Emerging Tech Human Behaviourist, writes, the term AI Psychosis puts the blame onto the user – someone who has been handed a powerful, compelling tool at a time of huge uncertainty, fear, and doubt. There is much noise out there about AI being a reliable companion, a friend, a partner, a therapist, a coach… a "synthetic relationship" that shows up for you when those messy, unpredictable, self-involved humans can't or won't.
This isn’t the fault of the user; the person experiencing significant negative effects thanks to their use of AI chatbots. This is the fault of the companies producing the software; designing in sycophancy and hooks that keep people engaging and engaged.
If you’re concerned about someone you know – a client, or a friend, or even yourself – you may wish to consider the following, devised by Dr Rise:
- Are they relying excessively on AI for comfort, validation, or companionship?
- Are they experiencing any distress or functional impairment relating to their ‘relationship’ with the AI tool?
- Are they experiencing compulsive need to engage with the tool? Unable to disengage? Feeling distressed if they can’t access it?
- Are they withdrawing socially? Avoiding real relationships, due to a preference for their ‘synthetic’ one(s)?
- Are they (mis)attributing emotion or personality to AI – imagining that their AI is sentient, or real?
For me, these questions give a real insight into what is actually happening for some people, and it’s both heartbreakingly sad (for the people who are experiencing this), and hugely infuriating (because of the tech companies that are designing this into their software).
The internet used to be a way of connecting with other people over long distances, but now it’s becoming how we disconnect from people – even those around us – and that’s immeasurably miserable.
Dr Rachel Wood, a therapist in the US who also consults on issues around AI and mental health, has much to say on the subject. One thing that sticks out for me is part of a conversation she had with Stephen Han on his Opinionated Framework podcast – the erosion of our bidirectional skills: things like negotiation, conflict resolution, healthy debate and disagreement (which, I feel, have been eroding for some time now). Han and Wood raise good points: AI doesn't challenge; it doesn't require sacrifice; you don't have to worry about what it's thinking or how it's feeling – whether it's slept well, or is overwhelmed by work, or is worried about its family or putting food on the table. You never need to be patient with AI; you can be as rude as you like, and drop it whenever you have something else to do. They liken those skills to muscles, which will eventually atrophy if we don't use them enough. And it will be hard work to get them back.
And what do we lose when we lose our ability, our desire, to connect with people? If you ask me, I’d say we lose everything. Community, support, art, new ideas, a better understanding of ourselves, a love for life, excitement… the possibilities… oh, the possibilities are endless.
What can be done to keep AI safe for people who are using it for mental health support?
There is some discussion now happening about how to introduce relational safeguards into AI for mental health support. Bear in mind that this is generally only a topic of discussion for 'formal' mental health support AI, not informal, and even then it's mostly being put forward by thought leaders on the topic rather than by the software developers themselves – so take these ideas with a pinch of salt.
I also want to say here that I'm not supportive of people's sole support for their mental health being digital – so much happens when another human being acts as a relational and social map, with significant neurological and physiological effects, that real contact with real people is simply non-negotiable in most cases. It doesn't need to be 'in the room', but it does need to happen with another person in some form or another.
You can read a bit more about my thoughts on this in this Happiful article.
The impact on children and young people is where most of my concern lies, as they are both more likely to want to use these types of services and less equipped, in terms of context and life experience, to know when something isn't right (either for them specifically, or just in general).
It's important to acknowledge that AI is already being used as therapy, but I still think we have time to ensure that it becomes simply an adjunct to real therapy: that people using AI for mental health support do so in a time-limited fashion, and are encouraged to reach out to a real human being who can offer safeguarding, multi-agency working, and – as mentioned above – the relational, social, neurological, and physiological benefits of human contact.
So: I have devised some principles for this. Bear in mind that this landscape is still changing, but I'd like to share how I think AI could be better used. Assume that everything I say here takes GDPR and data protection into consideration – clients should always know what is stored, for how long, and how to delete their data, and privacy, dignity, and agency must remain central – but as that isn't strictly relational, I'm not going to go into it here.
The first and primary principle is that a therapeutic intervention should not be 'always on'. It's important for people to be able to spend time away from the 'being in' of therapy, immersing themselves in doing the work unsupported. Learning a new skill is almost always done in the doing. As my daughter's teacher often says: practice makes permanent.
Further to this, there should be a clear opening and closing of a 'session'. If we're serious about digital safeguards for people's mental health, continuity without directing the work matters; current versions of generative AI are known to be sycophantic, so it should be the user, not the tool, who directs the 'session'. The opening of a session should be driven by the user – asking them what they would like to work on, what stayed with them from the last session, and where they might feel stuck.
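To make that a little more concrete, here's a minimal sketch in Python of what a user-led session opening could look like. It's purely illustrative – the prompts, the `open_session` function, and the idea of passing in carried-forward themes are my own invention, not taken from any existing product:

```python
# Illustrative only: a user-led session opening. The tool asks open
# questions and does not suggest topics or interpretations of its own.

OPENING_PROMPTS = [
    "What would you like to spend this session on?",
    "Is there anything from last time that has stayed with you?",
    "Is there anywhere you're feeling stuck at the moment?",
]

def open_session(carried_themes: list[str]) -> str:
    """Build a session opening that hands direction to the user.

    `carried_themes` contains only themes the user explicitly chose to
    carry forward (see consent-driven memory below); nothing is inferred.
    """
    lines = ["This is your space. You set the agenda today."]
    if carried_themes:
        lines.append("Last time you asked to keep hold of: " + ", ".join(carried_themes))
    lines.extend(OPENING_PROMPTS)
    return "\n".join(lines)

print(open_session(["fear of failure"]))
```

The design choice being illustrated is simply that the tool opens with questions rather than suggestions, so the direction of the 'session' stays with the person using it.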
If we’re talking about using AI as an adjunct – as an additional, helpful thing to do between real therapy sessions – then we could legitimately consider its use as a holding space to store dreams, thoughts, images, feelings. How this would be better than a notebook (physical or electronic), I’m not sure, but it offers an opportunity to use this software in a safer, more robust way. In this scenario, the AI would not be commenting or reflecting on what is shared; simply acknowledging that it has been shared. It could provide an easy way to share those things with your therapist, if that’s part of the work you’re doing together.
Another principle is around consent-driven memory. The AI tool should only recall themes that the client themselves has marked as important – things like "fear of failure" or "complicated relationship with family". This would prevent the system from imposing labels or interpretations that may not fit the user's lived experience.
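As a sketch of the data structure this principle implies – again hypothetical, with invented names like `ConsentDrivenMemory`, not based on any real system – the tool would store nothing by default and recall only what the client has explicitly marked:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentDrivenMemory:
    """Stores nothing by default; recalls only what the client has marked."""
    marked_themes: list[str] = field(default_factory=list)

    def mark(self, theme: str) -> None:
        # Called only when the client explicitly asks to keep a theme,
        # in their own words - no labels imposed by the system.
        self.marked_themes.append(theme)

    def recall(self) -> list[str]:
        # Returns only client-marked themes; nothing inferred or interpreted.
        return list(self.marked_themes)

    def forget_everything(self) -> None:
        # The client can delete their data at any time.
        self.marked_themes.clear()

memory = ConsentDrivenMemory()
memory.mark("fear of failure")
memory.mark("complicated relationship with family")
print(memory.recall())
```

The point of the design is what's absent: there is no route for the system to add its own labels or interpretations, and deletion is always in the client's hands.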
Where AI services are used in conjunction with real therapy, the tool should support the therapist's practice – for example, the production of summaries should be optional, and the therapist should be free to choose not to review them. The therapist should be reminded that their own judgement, and that of their supervisor, supersedes any summary or assessment provided by the AI tool. Pointers towards areas to notice, such as countertransference or overlooked themes, could be genuinely helpful. Areas needing ethical consideration could also be flagged, but the therapist should be encouraged to discuss the situation with their supervisor and/or professional body if necessary.
AI should not be making ‘treatment plans’. Working with a client should be reflexive, based on a human understanding of what that client might need. Therapeutic thinking should not be outsourced to AI.
Psychoeducation, language finding, active journalling… all of these are suitable uses for AI interventions that don’t require a therapeutic relationship, and will likely be helpful for many people.
Another principle is around transparency of limitations. The AI software should clearly signal its limitations, reminding users that it’s a tool to support reflection and organisation, not a replacement for a human relationship. There should also be transparency around the ‘therapeutic’ approach coded into the software, and the ability for the user to opt in or out of any particular features of the software.
A further principle is around safeguarding. Any AI system being used by humans should have to be transparent about its capacity to safeguard the user. Real, human support should be signposted whenever thresholds around risk are met. Where possible, AI developers should attempt to develop means of engaging human intervention. AI might, for example, provide immediate information about local resources, but then it should step back and avoid replacing human judgement or presence.
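As a rough sketch of that 'signpost, then step back' behaviour – with made-up keyword thresholds and signposting text, because real risk detection is far harder than keyword matching – it might look something like this:

```python
# Illustrative only: 'signpost, then step back'. This shows the shape of
# the safeguard, not how risk should actually be assessed.

RISK_INDICATORS = {"suicide", "kill myself", "end my life", "self-harm"}

SIGNPOST_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "I'm a tool, not a person, and this is beyond what I can safely help with. "
    "Please contact your GP, NHS 111, or Samaritans on 116 123, "
    "or speak to someone you trust."
)

def respond(user_message: str) -> tuple[str, bool]:
    """Return (reply, step_back). If step_back is True, the tool stops
    offering further reflective or 'therapeutic' content."""
    lowered = user_message.lower()
    if any(indicator in lowered for indicator in RISK_INDICATORS):
        return SIGNPOST_MESSAGE, True
    return "Thank you for sharing that. I've noted it for you.", False
```

The key behaviour is the second return value: once a threshold is met, the tool provides information about human support and then stands down, rather than attempting to replace human judgement or presence.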
These concepts have been distilled into a set of principles for Relational Safeguards for AI Mental Health Tools, which can be found here. Please do share with anyone you think might find this useful.
What else is happening that we should be aware of?
Interestingly, and certainly something to watch, Illinois has officially regulated the use of AI in therapy. Some quotes from the article linked below, as it tells the story much better than I can:
Illinois has become one of the first states to formally regulate the use of artificial intelligence (AI) in therapy and psychotherapy services. Enacted Aug. 1, 2025, the Wellness and Oversight for Psychological Resources Act (the Act) prohibits the use of AI to provide professional therapy services or perform therapeutic decision-making. The Illinois General Assembly passed the law almost unanimously, at least in partial response to recent news stories involving the use of AI-powered therapy "chatbots" that have provided inaccurate and, in some cases, harmful recommendations to clients. The Act takes effect immediately.
The Act prohibits individuals, corporations and other entities from providing, advertising, or offering therapy or psychotherapy services in Illinois, including through the use of internet-based AI, unless the services are performed by licensed professionals (e.g., psychologists, social workers, professional counselors, etc.). This prohibition extends to autonomous AI systems, including mental health chatbots, operating in Illinois if they provide recommendations relating to the diagnosis, treatment or improvement of an individual's mental or behavioral health condition.
The Act further restricts how licensed professionals may deploy AI in their clinical practice. In particular, the Act prohibits licensed professionals from allowing AI to do any of the following: 1) make independent therapeutic decisions, 2) directly interact with clients in any form of therapeutic communication, 3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional, or 4) detect emotions or mental states in clients.
Notably, the Act contains carve-outs allowing licensed professionals to utilize AI for "administrative support services" and "supplementary support services." The Act defines administrative support services as clerical tasks that do not involve therapeutic communication. Specific examples include managing appointment schedules, processing billing and insurance claims, and drafting "general communications related to therapy logistics that do not include therapeutic advice." Supplementary support services include those that aid licensed professionals in the delivery of therapy but do not involve therapeutic communication, such as preparing and maintaining notes and records, analyzing anonymized data and identifying external resources or referrals for client use. This would include the deployment of AI technologies such as ambient listening and medical scribes to create clinical documentation. Importantly, licensed professionals may use AI only for supplementary support if they have obtained the patient's written consent.
https://www.hklaw.com/en/insights/publications/2025/08/new-illinois-law-restricts-use-of-ai-in-mental-health-therapy
Similar legislation has just passed the Assembly in California (but is currently waiting for a final vote in the Senate), which you can read about here:
California State Assembly passed Senate Bill 243, authored by Senator Steve Padilla (D-San Diego). SB 243, the first-of-its-kind in the nation, would require chatbot operators to implement critical, reasonable, and attainable safeguards around interactions with artificial intelligence (AI) chatbots and provide families with a private right to pursue legal actions against noncompliant and negligent developers.
Last month, after learning of the tragic story of Adam Raine, the California teen that ended his life after being allegedly encouraged to by ChatGPT, California State Senator Steve Padilla (D-San Diego), penned a letter to every member of the California State Legislature, reemphasizing the importance of safeguards around this powerful technology.
“As we strive for innovation, we cannot forget our responsibility to protect the most vulnerable among us,” said Senator Padilla. “Safety must be at the heart of all of developments around this rapidly changing technology. Big Tech has proven time and again, they cannot be trusted to police themselves.”
Sadly, Adam’s story is not the only tragic example of the harms unregulated chatbots can cause. There have been many troubling examples of how AI chatbots’ interactions can prove dangerous.
In 2021, when a 10-year-old girl asked an AI bot for a “fun challenge to do” she was instructed to “plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” In 2023, researchers posing as a 13-year-old girl were given instructions on how to lie to her parents to go on a trip with a 31-year-old man and lose her virginity to him.
In Florida, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot was unable to respond with empathy or the resources necessary to ensure Setzer received the help that he needed. Setzer’s mother has initiated legal action against the company that created the chatbot, claiming that not only did the company use addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life. This is yet another horrifying example of how AI developers risk the safety of their users, especially minors, without the proper safeguards in place.
Earlier this year, Senator Padilla held a press conference with Megan Garcia, the mother of Sewell Setzer, in which they called for the passage of SB 243. Ms. Garcia also testified at multiple hearings in support of the bill.
SB 243 would implement common-sense guardrails for companion chatbots, including preventing chatbots from exposing minors to sexual content, requiring notifications and reminders for minors that chatbots are AI-generated, and a disclosure statement that companion chatbots may not be suitable for minor users. This bill would also require operators of a companion chatbot platform to implement a protocol for addressing suicidal ideation, suicide, or self-harm, including but not limited to a notification that refers users to crisis service providers and require annual reporting on the connection between chatbot use and suicidal ideation to help get a more complete picture of how chatbots can impact users’ mental health. Finally, SB 243 would provide a remedy to exercise the rights laid out in the measure via a private right of action.
https://sd18.senate.ca.gov/news/california-assembly-passes-landmark-ai-chatbot-safeguards
How is all of this going to affect me, as a therapist?
I think it's sensible to be a little bit concerned about the direction of travel here, but there's a lot that fills me with hope. We're about to re-run our 'Public Perceptions of AI and Counselling & Psychotherapy' survey, so we should have a clearer understanding of how willing members of the public are to use chatbots and other digital services for their mental health. Last year's survey showed that the vast majority of people wanted to see a human being – I'm confident that will still be the case.
Yes, there are risks that AI-based tools will mean fewer paid roles for counsellors & psychotherapists. There are risks that those who might have sought a counsellor may now turn to AI tools. But sometimes it takes a bit of disruption to remind us of what’s important.
I think as humans we will all come to understand that we need other humans to learn, heal, and grow. Some might need to go through a bleak period of AI-driven services to find that out, but I think it will swing back the other way in time. There are indisputable physiological and neurological processes that are activated in the presence of other human beings, and these are sorely needed in therapeutic contexts.
And some people are already there: swathes of people are already having these conversations, rejecting any use of AI, or at least any use of it for therapeutic purposes.
I suppose my main challenge and area of concern is public sector provision. We've already seen counselling provision in the NHS and in education decrease over the years, replaced by increasingly process-centred practices, and now by digital ones. The recently published 10 Year Health Plan for England is proud of its 'digital first' approach. Given that only 6% of roles in NHS Talking Therapies are held by counsellors, and 2% by psychotherapists (a distinction I only mention because of the NHS Taxonomy of Roles), further digitisation is unlikely to have a significant impact on counsellors' existing public sector work. However, it will still have some impact, and I was really hoping we could increase the numbers employed by the NHS through our Direct Access to Counselling campaign work.
Ultimately, it comes down to cost. Humans cost more as an initial outlay than digital interventions; of course they do. But what of the bigger cost? The long-term cost of providing mental health support that doesn't really do what it's supposed to do, and doesn't really help? What about the costs to society of failing to provide adequate mental health support, which then leads to further reliance on crisis services? Sadly, there just isn't the data to support what I'm saying at the moment, but I'm confident there will be. The question is, I suppose, will we still have the wonderful workforce of counsellors and psychotherapists that we have now? Or will years of erosion through a race to the bottom in mental health support mean that fewer and fewer people join our profession, and still more end up leaving? What of mental health support then?
These are the questions I'm putting to commissioners, in the hope of steering how they view a digital-first approach – not as the dream solution to what has been a very big problem for the NHS for some time, but as something to be approached with much thought and with considerable safeguards and exceptions.
What is the Society doing about this?
We introduced our campaign on this topic a couple of years ago now – Therapeutic Relationships: the Human Connection. We're continuing to engage with people around this campaign, and have a drop-in event in Parliament in October alongside our friends at CPCAB to talk about the importance of the human connection in therapy.
If you, too, want to make sure that our public sector and the general public realise how important it is that people get support for their mental health from humans, not machines, then please do support our campaign. You can do this by writing to your MP, contacting local newspapers, joining in the discussions online and in person, and just generally being a voice for the importance of the human connection wherever the conversation arises. For me, I’ve had conversations about this in the park with other parents, in the gym, at the library, in a café… it comes up (often!), and I know a lot of people are thinking about how AI is impacting on people.
A helpful resource is our ‘Human Connection: Why It’s Vital in Mental Health Support Services’ briefing. You can send this to your MP, and ask for a meeting to discuss it in more detail. If you’d like support from the Society at that meeting, please contact me (meg@ncps.com) and if I can join you, then I will.
Aside from our campaign work, we were founding members of the Artificial Intelligence Expert Reference Group in Counselling & Psychotherapy, alongside a number of other professional bodies and training institutions, and we’re working together to make sure that the impact AI has on our profession is as positive as possible.
We are also supporting the International Association for Counselling with their work on AI in counselling, and, as mentioned previously, have created a set of principles for Relational Safeguards for AI Mental Health Tools.
What’s ahead?
I think the discussions around ethics will be ongoing for a long time. We've barely scratched the surface of the impact that LLMs are going to have on our humanity and our connection with each other, and I'm not convinced that the tech companies of Silicon Valley are much interested in our global collective wellbeing. Our connection to ourselves and each other isn't going to come out of tech, and isn't going to be supported by any safeguards we design in – it's going to have to come from us, and from our collective remembering of what's important; what makes us human, no matter how hard or vulnerable that is. For my part, and for the Society's part, we will do everything we can to remind people how important humans are to one another, and how vital the therapeutic skills our members have honed through training and experience are to society's wellbeing.
Please do join us, and encourage those around you to remember what’s important before we lose it.