AI in Counselling and Psychotherapy

In an age where technology is evolving more rapidly than many of us can keep up with, the field of counselling and psychotherapy is grappling with a hugely transformative force: Artificial Intelligence (AI). Even at this early stage, AI is having a seismic impact on the profession and raising huge existential questions: what makes good therapy? Where is the value in seeing a human being over a chatbot? Is the future of humans providing therapy bleak, or is there still hope that humanity will continue to need and value the relational, human-focused nature of therapy?

Many practitioners may have a basic understanding of AI, perhaps having heard of generative AI platforms like OpenAI's ChatGPT or Google's Bard (now Gemini). Some may have heard of it in passing and given it little thought, while others are already trying to understand how they can incorporate AI into their practice to solve particular challenges, or to relieve them of what they see as burdensome tasks.

At the Society, we're acutely aware of AI's multifaceted role in talking therapy, but our collective understanding of its potential impact on the profession is still unfolding.

Using AI: What's Happening Now?


AI's influence in therapy goes beyond help with admin or therapy chatbots. At a foundational level, yes, AI can assist therapists in managing admin tasks such as scheduling and client records, enhancing efficiency and allowing more time for client care. Taking it a step further, though, AI chatbots and virtual assistants are now also being used to offer initial support and triage advice, which many - from individual practitioners to larger organisations - see as a really valuable first layer of assistance. A number of therapists have also reported using generative AI to support them with ethical quandaries between supervision sessions, or to coach them towards a greater understanding of theoretical models.

Some therapists are also understood to be using chatbots, or AI-assisted therapy - interactions with AI systems or applications - to provide additional support to their clients, for example while they're unavailable, or simply outside of sessions.

On the surface these appear to be really beneficial ways of using AI, but it's important to acknowledge that they also come with their own - not insignificant - risks. We'll look at some of the risks of integrating AI into therapeutic practice further on, but first it's worth seeing how the relationship between AI and talking therapy has evolved over time, to give some context for how we've arrived where we are now.

Talking to the Robots: A Brief History Lesson


AI in therapy, or at least adjacent to therapy, is not an entirely new thing. It has developed over time, from the most rudimentary of chatbots, through simple data analysis tools that helped identify patterns in client sessions, to sophisticated AI models capable of engaging in basic "therapeutic" conversations. How AI is used in therapy has changed significantly even over just the last couple of years, and it's hard to predict what changes the coming years will bring. Recent developments, for example, include AI tools that can analyse speech patterns and facial expressions during virtual therapy sessions, which could give therapists additional insights into a client's emotional state. I don't know if you're as terrified by the thought of this as I am, even just as a concept, but I've read perhaps too much dystopian sci-fi to be entirely comfortable with the idea. But anyway...

The very first baby steps towards using AI as a way of conversing go all the way back to the mid-1960s, with the development of ELIZA: a pioneering chatbot created by Joseph Weizenbaum. While ELIZA's conversational capabilities were incredibly basic, it was the start of a conversation (no pun intended) around how humans can, and may one day wish to, communicate with machines.

During the 1990s and early 2000s, we were introduced to the concept of using AI for data analysis. AI-powered tools emerged within healthcare spaces that analysed patient and client data to identify patterns and generate insights. These tools brought with them a paradigm shift in how we as a society viewed therapy: we could now collect quantitative data on client progress and treatment effectiveness, which allowed more process-centred modalities, such as CBT, to flourish.

In the mid-2000s, we started to see AI-powered Socratic chatbots - some may remember the oft-frustrating Jabberwacky as an early example. You could, if you wanted to, engage in a somewhat structured dialogue with these chatbots, which gave those who used them in this way the space to reflect on their thoughts, feelings, and behaviours. While these chatbots couldn't come close to replicating the depth and nuance of human conversation, they could provide a non-judgmental platform for people to explore their inner world.

With the rise of smartphones came virtual assistants - now household names - like Siri and Alexa, as well as other mobile apps that people can use to manage and keep track of every facet of life. Within these, you can find convenient and accessible platforms for finding guidance, accessing mental health resources, and tracking progress. While their therapeutic capabilities are limited, they have acted as something of a stepping stone towards more personalised and accessible mental health support. Many of us have now become familiar with conversing with the AI assistants in our homes; how big a leap would it be now to converse with our AI therapist?

In recent months, AI has taken a huge leap forward in its ability to engage in therapeutic-like conversations and to analyse speech patterns, facial expressions, and other non-verbal signals. AI models are now capable of conducting initial screenings, holding basic counselling-esque conversations, and even offering personalised advice on practically anything. These advancements, whether we like it or not, have opened up new avenues for expanding access to mental health support, particularly in underserved areas or for people with limited mobility. And they're constantly developing; history is being written as we speak.

AI & Ethics


I'm not here just to talk about the existential issues we're facing, though; there are also a number of ethical ones (yes, I am fun at parties), and as we all rush to adopt this new technology, it's important to be mindful of how we do so. As AI becomes more integrated into therapy, a number of ethical issues are already arising, such as:

  • Data Protection and Confidentiality: The data that we hold - as therapists - about our clients is some of the most sensitive data we could possibly hold about a person. It follows, then, that the confidentiality and security of client data in AI systems is of the absolute highest importance. Making sure that our clients' sensitive information is protected - not just in compliance with data protection laws and ethical standards, but above and beyond that where possible - is vital.
  • Decision-Making and Autonomy: If we're using AI in therapeutic decision-making, it should be to complement, not override, our expertise and autonomy. While AI can provide valuable insights based on data analysis, the final decision-making about any client must remain with the human therapist.
  • Therapist-Client Relationship: The therapeutic relationship is built on trust, empathy, and understanding – qualities that are inherently human and (currently...?) beyond AI's capabilities. Preserving the human element in therapy is crucial.


Let's look at some potential scenarios. Imagine, for example, that you're using a service that integrates some AI tools into your therapy practice. It stores everything from session notes to personal client details, even offering some analytical insights. But then you learn of a cyberattack that has left all that sensitive data exposed. You're required by law to let your clients know that this has happened, and when you do, the impact on their mental health and on the therapeutic relationship is significant. They have no idea who can now see this data about them, or what they might choose to do with it.

This isn't just a breach of data protection; it's a fundamental violation of the trust your clients place in you. You learn that the security of client data in AI systems needs to be ironclad, and you start to pay closer attention to the services and tools you use, and to their data protection policies. The damage for your clients, though, has already been done.

As therapists, we must ensure that our AI tools are not just compliant with privacy regulations but are fortified with the most robust cybersecurity measures available. It's about going above and beyond to protect our clients and the worlds and stories they share with us.

Another example - you may decide to use a generative AI platform to analyse and summarise your client's session notes. You feel that the AI might be able to provide some insights, or highlight patterns or issues that you've overlooked. Perhaps you also want to save yourself some time.

By inputting your client's confidential therapy notes into the AI platform, you risk breaching confidentiality. You have no idea how secure the platform is, or whether that data could be accessed by unauthorised parties. Has your client consented to their data being used in this way? Ethically, clients should be informed about how their data is used and consent to such uses, especially when third-party platforms are involved.

Let's say that your client is happy for you to use their data in this way, and understands the risks - it's still important not to rely solely on the AI's analysis. The accuracy and interpretation of data by AI can vary, and there's a risk of misinterpretation or oversimplification of complex human emotions and experiences.

Or another example: you use an AI tool that analyses your client's facial expressions and tone of voice. The tool suggests they might be depressed, but you know they've recently lost someone close to them and they're grieving. You're faced with a choice – trust the machine's data-driven conclusion or rely on your own expertise and understanding of your client's unique context. While AI can offer valuable insights, the ultimate decision-making power must remain firmly in human hands. AI is here to complement our expertise, not to override it.

One final example: envision a therapy clinic where an AI chatbot handles initial screenings and basic cognitive exercises. Convenient and non-judgmental, the chatbot becomes a hit with clients. But there's a catch. Some clients start preferring the AI's 24/7 availability over human interaction, which could potentially impact the development of meaningful therapist-client relationships. While AI's accessibility is helpful in some scenarios, we must be cautious not to let it replace the genuine empathy and connection that form the bedrock of therapy. More than that, therapy is work, and much of that work should be done by the client outside of the therapy room. Constant access to a 'therapist' creates a dynamic in which the client may come to rely on it too heavily.

Benefits of AI in Therapy


It isn't all doom and gloom. AI is clearly a marvellous new tool that will bring significant positive advances to our lives, and that can absolutely apply to therapy too. Here are some things AI is already doing that are making a huge difference:

  • Enhancing Efficiency: AI can handle routine tasks, allowing therapists to focus more on client care. A number of software platforms available to therapists already integrate AI to support the day-to-day running of a practice, saving time and effort and allowing therapists to spend more time concentrating on their clients (or looking after themselves so they can better serve their clients!).
  • Supplementing Sessions: AI tools can provide supplementary support between sessions, offering clients resources like mood tracking, stress management techniques, and self-help guidance. This won't be something that therapists from all modalities will use, but those who do - for example, those who work with CBT or other solution-focused therapies - will find that it supports their therapeutic process well.
  • Creative and Novel Interventions: AI is being used to introduce innovative ways of working with clients, such as virtual reality (VR) therapy for conditions like PTSD, which offers immersive experiences that traditional therapy may not be able to provide.
  • Training and Supervision: AI can provide simulations that mimic real-life scenarios, allowing therapists to work through them in an engaging way. It can also assist between supervision sessions by offering insights and perspectives on therapy sessions.


Benefits of Humans in Therapy


We know, through comprehensive studies going back decades, that the therapeutic alliance, or therapeutic relationship, is a key component of successful therapy. It isn't the interventions you use (although they do play a part), or any of the other factors that determine how you work with clients; it ultimately comes down to the quality of your relationship.

This obviously relies heavily on human qualities like empathy, compassion, and understanding that AI, in its current state, cannot replicate. Who knows whether that will change in the future, but for now - as clever as it may be - AI doesn't possess the capacity to create a genuine connection with a human.

There are so many other things an AI therapist can't do. It can't just sit with you in your hard moments, simply being a comforting presence while you cry. It can't offer you a box of tissues, or that much-needed glass of water that helps you regulate yourself again. It can't tell if you've come to therapy in three-day-old clothes because getting undressed and dressed again just seems so hard. It can't smile with you, laugh with you, or even cry with you. It won't be genuinely delighted when you share that things are really turning around for you at home, or at work, or at school. You won't be able to navigate through the messiness that is being human together, or experience the uncomfortable but life-changing growth that comes through working through the difficulties of a relationship with another human being.

So while AI is absolutely going to change the landscape of the therapeutic profession, it's vital that therapists and clients alike realise that what makes therapy therapeutic isn't just about the words your therapist uses, or their skill in determining how to use the interventions in their toolkit - it's about the connection. And we have to find a way to preserve and enshrine that, whatever comes next.

What Comes Next?

As I mentioned earlier, we now have AI tools that can analyse speech patterns, facial expressions, and the other subtle, non-verbal things we're communicating all the time during virtual therapy sessions. This, alongside generative AI, avatars, and increasingly convincing synthetic voices, means we're likely not that far away from seeing a wholly AI "therapist" that can read your facial expressions and other non-verbal cues, talk to you like a real person, and respond to what you're telling it in real time. You might want to know what this means for our profession, and I can't answer that question (I wish I could, and I will try), but I definitely think there needs to be more conversation around this. The future is impossible to see: AI has created an event horizon, not just for the counselling and psychotherapy profession, but for society as a whole. It will take all of us working together to ensure that humanity is enshrined in therapy, and to educate people seeking therapy so that they know there's more to it than sharing your thoughts and feelings and getting a perfectly scripted response in return.

The biggest reward in therapy comes from the inherent risk of rejection by another person, and the healing that happens when you receive unconditional positive regard from someone who isn't programmed to like you. Where is the risk of rejection when you're working with a robot that can't be offended, or take a dislike to you? Without that, how can you feel truly assured of your innate goodness and lovability as a person, regardless of what you bring to therapy? I don't think you can, and that is where we will find that AI interventions, offered up perfectly according to the script, just don't work in the way they're expected to.

It's clear that the therapeutic landscape is going to change in ways we are only beginning to understand. AI, with its ever-advancing capabilities, promises efficiencies and innovations that may very well reshape our profession.

Our role at the Society is to gently but persistently remind the world that the heart of therapy lies in the uniquely human connection – a space where empathy, compassion, and understanding come together to create healing and growth, beyond the capabilities of programming and algorithms.

We're not talking about choosing between AI and humanity, but rather finding a harmonious balance where each complements the other. As therapists, our role is evolving, not diminishing. We are the custodians of a sacred space where genuine human connections are made; where therapy isn't just about what we say and how we say it, but about how we navigate our shared human experience together.
